Test Report: QEMU_macOS 19648

584241d6059a856bd6609ebe9456581adc627cea:2024-09-17:36253

Failed tests (95/270)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.55
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.13
33 TestAddons/parallel/Registry 71.42
46 TestCertOptions 10.12
47 TestCertExpiration 195.34
48 TestDockerFlags 10.29
49 TestForceSystemdFlag 10.43
50 TestForceSystemdEnv 12.53
95 TestFunctional/parallel/ServiceCmdConnect 36.63
167 TestMultiControlPlane/serial/StopSecondaryNode 312.33
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.13
169 TestMultiControlPlane/serial/RestartSecondaryNode 305.26
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.58
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 234.02
177 TestImageBuild/serial/Setup 10.13
180 TestJSONOutput/start/Command 9.8
186 TestJSONOutput/pause/Command 0.08
192 TestJSONOutput/unpause/Command 0.04
209 TestMinikubeProfile 10.19
212 TestMountStart/serial/StartWithMountFirst 10.05
215 TestMultiNode/serial/FreshStart2Nodes 10.25
216 TestMultiNode/serial/DeployApp2Nodes 80.64
217 TestMultiNode/serial/PingHostFrom2Pods 0.09
218 TestMultiNode/serial/AddNode 0.07
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.08
221 TestMultiNode/serial/CopyFile 0.06
222 TestMultiNode/serial/StopNode 0.14
223 TestMultiNode/serial/StartAfterStop 38.37
224 TestMultiNode/serial/RestartKeepsNodes 8.9
225 TestMultiNode/serial/DeleteNode 0.1
226 TestMultiNode/serial/StopMultiNode 2.12
227 TestMultiNode/serial/RestartMultiNode 5.26
228 TestMultiNode/serial/ValidateNameConflict 20.32
232 TestPreload 9.97
234 TestScheduledStopUnix 9.96
235 TestSkaffold 12.79
238 TestRunningBinaryUpgrade 605.66
240 TestKubernetesUpgrade 18.54
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.2
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.65
256 TestStoppedBinaryUpgrade/Upgrade 583.46
258 TestPause/serial/Start 9.98
268 TestNoKubernetes/serial/StartWithK8s 9.96
269 TestNoKubernetes/serial/StartWithStopK8s 5.31
270 TestNoKubernetes/serial/Start 5.27
274 TestNoKubernetes/serial/StartNoArgs 5.33
276 TestNetworkPlugins/group/auto/Start 9.85
277 TestNetworkPlugins/group/flannel/Start 9.91
278 TestNetworkPlugins/group/kindnet/Start 10.03
279 TestNetworkPlugins/group/enable-default-cni/Start 9.84
280 TestNetworkPlugins/group/bridge/Start 9.96
281 TestNetworkPlugins/group/kubenet/Start 9.82
282 TestNetworkPlugins/group/custom-flannel/Start 9.73
283 TestNetworkPlugins/group/calico/Start 9.81
284 TestNetworkPlugins/group/false/Start 9.91
287 TestStartStop/group/old-k8s-version/serial/FirstStart 9.93
288 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
289 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
292 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
293 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
294 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
295 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
296 TestStartStop/group/old-k8s-version/serial/Pause 0.1
298 TestStartStop/group/no-preload/serial/FirstStart 9.85
299 TestStartStop/group/no-preload/serial/DeployApp 0.09
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
303 TestStartStop/group/embed-certs/serial/FirstStart 10.03
305 TestStartStop/group/no-preload/serial/SecondStart 6.61
306 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
307 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
309 TestStartStop/group/no-preload/serial/Pause 0.1
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.59
312 TestStartStop/group/embed-certs/serial/DeployApp 0.1
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
316 TestStartStop/group/embed-certs/serial/SecondStart 6.61
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
318 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
322 TestStartStop/group/embed-certs/serial/Pause 0.11
325 TestStartStop/group/newest-cni/serial/FirstStart 10.02
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.45
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
336 TestStartStop/group/newest-cni/serial/SecondStart 5.26
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (18.55s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-459000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-459000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.544701833s)

-- stdout --
	{"specversion":"1.0","id":"fd78ad2a-a596-4a59-ad91-40744b9ab8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-459000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99f0835e-6944-4199-b2ba-d0a3b0cfba12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"6ce07e24-0fa2-4304-8d5b-1771b3e21a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig"}}
	{"specversion":"1.0","id":"b031e0b6-499d-438f-9d57-302045e8eaff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bca10d67-d084-468e-a154-8c54e5064434","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f72894c5-1252-4208-8559-3e4a5c9bfac4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube"}}
	{"specversion":"1.0","id":"ed37754c-9752-40ec-afad-9df4633b8aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"393d25cb-e185-42c4-866e-fe5760776fa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb1bdf24-fd16-4ce2-b7d4-2ea12147f9c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5a0f0626-1cb6-45a9-bf67-d1978867b116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1cfbca29-1cff-4c96-bd6a-8dd147ecdeac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-459000\" primary control-plane node in \"download-only-459000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68d2d816-8ea8-41ba-940f-9423a0e77576","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"614295f8-2f90-4302-8421-84e851b06b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0] Decompressors:map[bz2:0x14000681010 gz:0x14000681018 tar:0x14000680fc0 tar.bz2:0x14000680fd0 tar.gz:0x14000680fe0 tar.xz:0x14000680ff0 tar.zst:0x14000681000 tbz2:0x14000680fd0 tgz:0x14000680fe0 txz:0x14000680ff0 tzst:0x14000681000 xz:0x14000681020 zip:0x14000681030 zst:0x14000681028] Getters:map[file:0x14001516360 http:0x140007660a0 https:0x140007660f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"cd9c95b8-b73c-444b-827a-6e2bcd9827e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0917 01:37:24.831456    1557 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:24.831617    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:24.831621    1557 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:24.831623    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:24.831735    1557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	W0917 01:37:24.831823    1557 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19648-1056/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19648-1056/.minikube/config/config.json: no such file or directory
	I0917 01:37:24.833192    1557 out.go:352] Setting JSON to true
	I0917 01:37:24.853545    1557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":414,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:37:24.853610    1557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:24.858208    1557 out.go:97] [download-only-459000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:37:24.858355    1557 notify.go:220] Checking for updates...
	W0917 01:37:24.858377    1557 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 01:37:24.861250    1557 out.go:169] MINIKUBE_LOCATION=19648
	I0917 01:37:24.864301    1557 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:37:24.868229    1557 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:37:24.871211    1557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:24.874228    1557 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	W0917 01:37:24.879286    1557 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 01:37:24.879519    1557 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:24.883198    1557 out.go:97] Using the qemu2 driver based on user configuration
	I0917 01:37:24.883221    1557 start.go:297] selected driver: qemu2
	I0917 01:37:24.883236    1557 start.go:901] validating driver "qemu2" against <nil>
	I0917 01:37:24.883316    1557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:24.886225    1557 out.go:169] Automatically selected the socket_vmnet network
	I0917 01:37:24.891960    1557 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 01:37:24.892050    1557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 01:37:24.892095    1557 cni.go:84] Creating CNI manager for ""
	I0917 01:37:24.892144    1557 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 01:37:24.892194    1557 start.go:340] cluster config:
	{Name:download-only-459000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:24.897269    1557 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:24.902190    1557 out.go:97] Downloading VM boot image ...
	I0917 01:37:24.902204    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso
	I0917 01:37:32.511796    1557 out.go:97] Starting "download-only-459000" primary control-plane node in "download-only-459000" cluster
	I0917 01:37:32.511824    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:32.603171    1557 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 01:37:32.603199    1557 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:32.603436    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:32.607681    1557 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 01:37:32.607689    1557 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:32.700044    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 01:37:41.832399    1557 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:41.832581    1557 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:42.527076    1557 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 01:37:42.527278    1557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-459000/config.json ...
	I0917 01:37:42.527296    1557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-459000/config.json: {Name:mk627dcd15406011f4f6d1943d972dd426926a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:37:42.527514    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:42.527759    1557 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0917 01:37:43.302316    1557 out.go:193] 
	W0917 01:37:43.308767    1557 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0] Decompressors:map[bz2:0x14000681010 gz:0x14000681018 tar:0x14000680fc0 tar.bz2:0x14000680fd0 tar.gz:0x14000680fe0 tar.xz:0x14000680ff0 tar.zst:0x14000681000 tbz2:0x14000680fd0 tgz:0x14000680fe0 txz:0x14000680ff0 tzst:0x14000681000 xz:0x14000681020 zip:0x14000681030 zst:0x14000681028] Getters:map[file:0x14001516360 http:0x140007660a0 https:0x140007660f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0917 01:37:43.308797    1557 out_reason.go:110] 
	W0917 01:37:43.316623    1557 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:37:43.320725    1557 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-459000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.55s)
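
The exit-40 failure above reduces to a single HTTP 404: the getter could not fetch the kubectl checksum file for darwin/arm64. Kubernetes only began publishing darwin/arm64 client binaries around v1.21, so no v1.20.0 build exists at dl.k8s.io and the 404 is expected rather than a transient network fault. An illustrative probe (not part of the test suite) that confirms the artifact is absent upstream:

	# Print the final HTTP status for the checksum URL from the error above;
	# -L follows dl.k8s.io's redirect to the release CDN. Expect "404".
	curl -sIL -o /dev/null -w '%{http_code}\n' \
	  "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"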

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
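
This subtest fails as a direct consequence of the download failure above: the binary never reached the cache, so the test's stat check finds nothing. The equivalent manual check (illustrative; the path is copied verbatim from the log):

	# The test asserts this cached binary exists; after the failed
	# download it does not, hence "no such file or directory".
	stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl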

TestOffline (10.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-152000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-152000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.981989209s)

-- stdout --
	* [offline-docker-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-152000" primary control-plane node in "offline-docker-152000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-152000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:31:34.349165    3950 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:31:34.349301    3950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:34.349305    3950 out.go:358] Setting ErrFile to fd 2...
	I0917 02:31:34.349307    3950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:34.349422    3950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:31:34.350431    3950 out.go:352] Setting JSON to false
	I0917 02:31:34.368358    3950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3664,"bootTime":1726561830,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:31:34.368428    3950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:31:34.374141    3950 out.go:177] * [offline-docker-152000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:31:34.382113    3950 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:31:34.382111    3950 notify.go:220] Checking for updates...
	I0917 02:31:34.388984    3950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:31:34.392077    3950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:31:34.395163    3950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:31:34.396285    3950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:31:34.399094    3950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:31:34.402509    3950 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:34.402581    3950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:31:34.405928    3950 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:31:34.413085    3950 start.go:297] selected driver: qemu2
	I0917 02:31:34.413096    3950 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:31:34.413104    3950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:31:34.415233    3950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:31:34.418099    3950 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:31:34.421254    3950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:31:34.421270    3950 cni.go:84] Creating CNI manager for ""
	I0917 02:31:34.421299    3950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:31:34.421308    3950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:31:34.421340    3950 start.go:340] cluster config:
	{Name:offline-docker-152000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:31:34.425196    3950 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:34.431049    3950 out.go:177] * Starting "offline-docker-152000" primary control-plane node in "offline-docker-152000" cluster
	I0917 02:31:34.435182    3950 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:31:34.435218    3950 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:31:34.435224    3950 cache.go:56] Caching tarball of preloaded images
	I0917 02:31:34.435297    3950 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:31:34.435302    3950 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:31:34.435363    3950 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/offline-docker-152000/config.json ...
	I0917 02:31:34.435376    3950 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/offline-docker-152000/config.json: {Name:mk414c2243d9d0677a780f970adf348582624ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:31:34.435667    3950 start.go:360] acquireMachinesLock for offline-docker-152000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:34.435701    3950 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "offline-docker-152000"
	I0917 02:31:34.435711    3950 start.go:93] Provisioning new machine with config: &{Name:offline-docker-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:34.435736    3950 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:34.444080    3950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:34.460265    3950 start.go:159] libmachine.API.Create for "offline-docker-152000" (driver="qemu2")
	I0917 02:31:34.460304    3950 client.go:168] LocalClient.Create starting
	I0917 02:31:34.460393    3950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:34.460436    3950 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:34.460445    3950 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:34.460492    3950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:34.460516    3950 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:34.460524    3950 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:34.460879    3950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:34.617788    3950 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:34.698364    3950 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:34.698373    3950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:34.698578    3950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:34.804447    3950 main.go:141] libmachine: STDOUT: 
	I0917 02:31:34.804482    3950 main.go:141] libmachine: STDERR: 
	I0917 02:31:34.804590    3950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2 +20000M
	I0917 02:31:34.818454    3950 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:34.818479    3950 main.go:141] libmachine: STDERR: 
	I0917 02:31:34.818517    3950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:34.818526    3950 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:34.818548    3950 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:34.818595    3950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:f7:e5:9e:2e:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:34.820937    3950 main.go:141] libmachine: STDOUT: 
	I0917 02:31:34.820959    3950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:34.820994    3950 client.go:171] duration metric: took 360.685167ms to LocalClient.Create
	I0917 02:31:36.823062    3950 start.go:128] duration metric: took 2.3873295s to createHost
	I0917 02:31:36.823079    3950 start.go:83] releasing machines lock for "offline-docker-152000", held for 2.387385333s
	W0917 02:31:36.823096    3950 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:36.836295    3950 out.go:177] * Deleting "offline-docker-152000" in qemu2 ...
	W0917 02:31:36.853502    3950 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:36.853512    3950 start.go:729] Will try again in 5 seconds ...
	I0917 02:31:41.855601    3950 start.go:360] acquireMachinesLock for offline-docker-152000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:41.855719    3950 start.go:364] duration metric: took 98.375µs to acquireMachinesLock for "offline-docker-152000"
	I0917 02:31:41.855747    3950 start.go:93] Provisioning new machine with config: &{Name:offline-docker-152000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-152000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:41.855817    3950 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:41.866151    3950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:41.882354    3950 start.go:159] libmachine.API.Create for "offline-docker-152000" (driver="qemu2")
	I0917 02:31:41.882390    3950 client.go:168] LocalClient.Create starting
	I0917 02:31:41.882453    3950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:41.882488    3950 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:41.882497    3950 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:41.882541    3950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:41.882564    3950 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:41.882569    3950 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:41.882906    3950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:42.075493    3950 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:42.237673    3950 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:42.237686    3950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:42.237955    3950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:42.249241    3950 main.go:141] libmachine: STDOUT: 
	I0917 02:31:42.249264    3950 main.go:141] libmachine: STDERR: 
	I0917 02:31:42.249359    3950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2 +20000M
	I0917 02:31:42.258784    3950 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:42.258804    3950 main.go:141] libmachine: STDERR: 
	I0917 02:31:42.258819    3950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:42.258825    3950 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:42.258833    3950 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:42.258880    3950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:1d:21:3b:7d:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/offline-docker-152000/disk.qcow2
	I0917 02:31:42.260921    3950 main.go:141] libmachine: STDOUT: 
	I0917 02:31:42.260940    3950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:42.260960    3950 client.go:171] duration metric: took 378.566083ms to LocalClient.Create
	I0917 02:31:44.263166    3950 start.go:128] duration metric: took 2.407323125s to createHost
	I0917 02:31:44.263365    3950 start.go:83] releasing machines lock for "offline-docker-152000", held for 2.407574458s
	W0917 02:31:44.263748    3950 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-152000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:44.273263    3950 out.go:201] 
	W0917 02:31:44.277347    3950 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:31:44.277395    3950 out.go:270] * 
	* 
	W0917 02:31:44.280058    3950 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:31:44.288306    3950 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-152000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-17 02:31:44.303134 -0700 PDT m=+3259.588041793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-152000 -n offline-docker-152000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-152000 -n offline-docker-152000: exit status 7 (68.091875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-152000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-152000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-152000
--- FAIL: TestOffline (10.13s)
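
Every VM creation attempt in this run dies at the same step: the qemu2 driver launches QEMU through socket_vmnet_client, which gets "Connection refused" dialing /var/run/socket_vmnet. That means the socket_vmnet daemon was not running on the CI host, and the same error accounts for the long run of ~10 s start failures in the table above. A host-side triage sketch (illustrative; the launchd label is an assumption for a source install of socket_vmnet, which the /opt/socket_vmnet prefix in the log suggests):

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon; pick the variant matching the install method.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet  # source install (assumed label)
	# sudo brew services restart socket_vmnet                         # Homebrew install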

TestAddons/parallel/Registry (71.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.898083ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2rpt2" [d249abf4-4b5e-493f-9c58-7f33d9b1ff7c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021937333s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gr45x" [408375c4-f267-45e0-b73e-2342a51e46e4] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008717167s
addons_test.go:342: (dbg) Run:  kubectl --context addons-401000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.049837333s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
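
The in-cluster probe (wget --spider against the registry Service's cluster DNS name) hung for the full one-minute client timeout even though both the registry and registry-proxy pods were Running, which points at Service DNS or proxying rather than the pods themselves. A first diagnostic step (illustrative, not part of the suite) is to resolve the name from inside the cluster using the same busybox image the test uses:

	# A failure here isolates the problem to cluster DNS rather than
	# the registry itself.
	kubectl --context addons-401000 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local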
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 ip
2024/09/17 01:51:08 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-401000 -n addons-401000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-459000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-459000              | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| start   | -o=json --download-only              | download-only-406000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-406000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-406000              | download-only-406000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-459000              | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-406000              | download-only-406000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| start   | --download-only -p                   | binary-mirror-969000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | binary-mirror-969000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49311               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-969000              | binary-mirror-969000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| addons  | disable dashboard -p                 | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | addons-401000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | addons-401000                        |                      |         |         |                     |                     |
	| start   | -p addons-401000 --wait=true         | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:41 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-401000 addons disable         | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:41 PDT | 17 Sep 24 01:41 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-401000 addons                 | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:50 PDT | 17 Sep 24 01:50 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-401000 addons                 | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:50 PDT | 17 Sep 24 01:50 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-401000 addons disable         | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:50 PDT | 17 Sep 24 01:51 PDT |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | -p addons-401000                     |                      |         |         |                     |                     |
	| ip      | addons-401000 ip                     | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	| addons  | addons-401000 addons disable         | addons-401000        | jenkins | v1.34.0 | 17 Sep 24 01:51 PDT | 17 Sep 24 01:51 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
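The Audit table above is minikube's own invocation history for this run: one row per CLI call with profile, version, and start/end timestamps; an empty End Time generally marks a call that did not finish cleanly. The same history can be dumped straight from the binary (a sketch; the --audit flag is an assumption for this minikube version):
    $ out/minikube-darwin-arm64 logs --audit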
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:37:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:37:54.939914    1634 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:54.940042    1634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:54.940046    1634 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:54.940049    1634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:54.940167    1634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 01:37:54.941204    1634 out.go:352] Setting JSON to false
	I0917 01:37:54.958063    1634 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":444,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:37:54.958130    1634 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:54.963195    1634 out.go:177] * [addons-401000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:37:54.970169    1634 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:37:54.970215    1634 notify.go:220] Checking for updates...
	I0917 01:37:54.977053    1634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:37:54.982114    1634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:37:54.985176    1634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:54.988070    1634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 01:37:54.991131    1634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:37:54.994260    1634 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:54.998029    1634 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 01:37:55.005098    1634 start.go:297] selected driver: qemu2
	I0917 01:37:55.005104    1634 start.go:901] validating driver "qemu2" against <nil>
	I0917 01:37:55.005111    1634 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:37:55.007428    1634 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:55.010073    1634 out.go:177] * Automatically selected the socket_vmnet network
	I0917 01:37:55.013217    1634 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:37:55.013232    1634 cni.go:84] Creating CNI manager for ""
	I0917 01:37:55.013253    1634 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:37:55.013262    1634 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 01:37:55.013291    1634 start.go:340] cluster config:
	{Name:addons-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:55.017103    1634 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:55.026085    1634 out.go:177] * Starting "addons-401000" primary control-plane node in "addons-401000" cluster
	I0917 01:37:55.030101    1634 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:55.030116    1634 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 01:37:55.030123    1634 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:55.030183    1634 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 01:37:55.030189    1634 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 01:37:55.030390    1634 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/config.json ...
	I0917 01:37:55.030402    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/config.json: {Name:mk139a920867ff9c91d0580383c88259c79d3908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:37:55.030775    1634 start.go:360] acquireMachinesLock for addons-401000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:37:55.030850    1634 start.go:364] duration metric: took 69.041µs to acquireMachinesLock for "addons-401000"
	I0917 01:37:55.030861    1634 start.go:93] Provisioning new machine with config: &{Name:addons-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 01:37:55.030893    1634 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 01:37:55.039104    1634 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0917 01:37:55.286159    1634 start.go:159] libmachine.API.Create for "addons-401000" (driver="qemu2")
	I0917 01:37:55.286210    1634 client.go:168] LocalClient.Create starting
	I0917 01:37:55.286393    1634 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 01:37:55.394360    1634 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 01:37:55.485227    1634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 01:37:56.374451    1634 main.go:141] libmachine: Creating SSH key...
	I0917 01:37:56.671557    1634 main.go:141] libmachine: Creating Disk image...
	I0917 01:37:56.671568    1634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 01:37:56.673434    1634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2
	I0917 01:37:56.692943    1634 main.go:141] libmachine: STDOUT: 
	I0917 01:37:56.692981    1634 main.go:141] libmachine: STDERR: 
	I0917 01:37:56.693060    1634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2 +20000M
	I0917 01:37:56.701373    1634 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 01:37:56.701388    1634 main.go:141] libmachine: STDERR: 
	I0917 01:37:56.701415    1634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2
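Disk preparation is just the two qemu-img calls shown: convert the raw scaffold to qcow2, then grow it by 20000 MB. Standalone, with paths shortened (a sketch; qcow2 allocates lazily, so the resize is nearly free):
    $ qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    $ qemu-img resize disk.qcow2 +20000M
    $ qemu-img info disk.qcow2    # confirm the new virtual size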
	I0917 01:37:56.701424    1634 main.go:141] libmachine: Starting QEMU VM...
	I0917 01:37:56.701463    1634 qemu.go:418] Using hvf for hardware acceleration
	I0917 01:37:56.701495    1634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:d4:93:62:cf:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/disk.qcow2
	I0917 01:37:56.758411    1634 main.go:141] libmachine: STDOUT: 
	I0917 01:37:56.758438    1634 main.go:141] libmachine: STDERR: 
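Because the VM is started with -daemonize and -pidfile, its liveness can be checked from the host without a console (a sketch; path shortened to the machine directory used above):
    $ PID=$(cat .minikube/machines/addons-401000/qemu.pid)
    $ ps -p "$PID" -o pid,etime,command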
	I0917 01:37:56.758441    1634 main.go:141] libmachine: Attempt 0
	I0917 01:37:56.758455    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:37:56.758518    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:37:56.758538    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:37:58.760715    1634 main.go:141] libmachine: Attempt 1
	I0917 01:37:58.760795    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:37:58.761146    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:37:58.761198    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:00.763448    1634 main.go:141] libmachine: Attempt 2
	I0917 01:38:00.763530    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:00.763786    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:00.763838    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:02.766005    1634 main.go:141] libmachine: Attempt 3
	I0917 01:38:02.766033    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:02.766146    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:02.766173    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:04.768222    1634 main.go:141] libmachine: Attempt 4
	I0917 01:38:04.768229    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:04.768347    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:04.768388    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:06.770425    1634 main.go:141] libmachine: Attempt 5
	I0917 01:38:06.770433    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:06.770472    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:06.770478    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:08.772534    1634 main.go:141] libmachine: Attempt 6
	I0917 01:38:08.772552    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:08.772634    1634 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0917 01:38:08.772645    1634 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x66ea8fb4}
	I0917 01:38:10.774758    1634 main.go:141] libmachine: Attempt 7
	I0917 01:38:10.774783    1634 main.go:141] libmachine: Searching for ae:d4:93:62:cf:30 in /var/db/dhcpd_leases ...
	I0917 01:38:10.774901    1634 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0917 01:38:10.774914    1634 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:ae:d4:93:62:cf:30 ID:1,ae:d4:93:62:cf:30 Lease:0x66ea9171}
	I0917 01:38:10.774917    1634 main.go:141] libmachine: Found match: ae:d4:93:62:cf:30
	I0917 01:38:10.774926    1634 main.go:141] libmachine: IP: 192.168.105.2
	I0917 01:38:10.774931    1634 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
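The Attempt 0..7 loop above is minikube's IP discovery on macOS: it polls /var/db/dhcpd_leases, which the system DHCP server maintains for the socket_vmnet network, until an entry carrying the VM's MAC appears. The same lookup by hand (a sketch; the lease-file layout varies slightly across macOS versions):
    $ grep -B2 -A3 'ae:d4:93:62:cf:30' /var/db/dhcpd_leases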
	I0917 01:38:12.797246    1634 machine.go:93] provisionDockerMachine start ...
	I0917 01:38:12.797980    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:12.798501    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:12.798520    1634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:38:12.866335    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 01:38:12.866365    1634 buildroot.go:166] provisioning hostname "addons-401000"
	I0917 01:38:12.866491    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:12.866714    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:12.866728    1634 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-401000 && echo "addons-401000" | sudo tee /etc/hostname
	I0917 01:38:12.927165    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-401000
	
	I0917 01:38:12.927244    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:12.927403    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:12.927414    1634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-401000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-401000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-401000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:38:12.979460    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:38:12.979477    1634 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1056/.minikube}
	I0917 01:38:12.979498    1634 buildroot.go:174] setting up certificates
	I0917 01:38:12.979503    1634 provision.go:84] configureAuth start
	I0917 01:38:12.979507    1634 provision.go:143] copyHostCerts
	I0917 01:38:12.979631    1634 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem (1082 bytes)
	I0917 01:38:12.979866    1634 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem (1123 bytes)
	I0917 01:38:12.980014    1634 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem (1675 bytes)
	I0917 01:38:12.980131    1634 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem org=jenkins.addons-401000 san=[127.0.0.1 192.168.105.2 addons-401000 localhost minikube]
	I0917 01:38:13.076037    1634 provision.go:177] copyRemoteCerts
	I0917 01:38:13.076845    1634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:38:13.076861    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:13.104695    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 01:38:13.113131    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 01:38:13.121498    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 01:38:13.129707    1634 provision.go:87] duration metric: took 150.186375ms to configureAuth
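The certs copied above are what let the host reach the guest's Docker daemon over TLS on port 2376 (see the dockerd ExecStart rendered below). A sketch of verifying that from the host with the stock docker CLI, paths shortened:
    $ docker --tlsverify \
        --tlscacert ~/.minikube/ca.pem \
        --tlscert   ~/.minikube/cert.pem \
        --tlskey    ~/.minikube/key.pem \
        -H tcp://192.168.105.2:2376 version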
	I0917 01:38:13.129716    1634 buildroot.go:189] setting minikube options for container-runtime
	I0917 01:38:13.129831    1634 config.go:182] Loaded profile config "addons-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:13.129872    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:13.129958    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:13.129963    1634 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 01:38:13.172908    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 01:38:13.172915    1634 buildroot.go:70] root file system type: tmpfs
	I0917 01:38:13.172968    1634 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 01:38:13.173015    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:13.173114    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:13.173147    1634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 01:38:13.221235    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 01:38:13.221287    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:13.221387    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:13.221396    1634 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 01:38:14.605188    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 01:38:14.605201    1634 machine.go:96] duration metric: took 1.807922s to provisionDockerMachine
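The install-if-changed idiom above (diff || { mv; daemon-reload; enable; restart; }) only restarts Docker when the rendered unit differs from what is installed; on first boot the diff fails because no unit exists yet, which is exactly the "can't stat" path shown in the output. To inspect what ended up installed (a sketch):
    $ out/minikube-darwin-arm64 -p addons-401000 ssh -- systemctl cat docker.service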
	I0917 01:38:14.605208    1634 client.go:171] duration metric: took 19.31896025s to LocalClient.Create
	I0917 01:38:14.605220    1634 start.go:167] duration metric: took 19.319036041s to libmachine.API.Create "addons-401000"
	I0917 01:38:14.605224    1634 start.go:293] postStartSetup for "addons-401000" (driver="qemu2")
	I0917 01:38:14.605231    1634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:38:14.605310    1634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:38:14.605319    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:14.630022    1634 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:38:14.631748    1634 info.go:137] Remote host: Buildroot 2023.02.9
	I0917 01:38:14.631759    1634 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/addons for local assets ...
	I0917 01:38:14.631853    1634 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/files for local assets ...
	I0917 01:38:14.631896    1634 start.go:296] duration metric: took 26.667542ms for postStartSetup
	I0917 01:38:14.632341    1634 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/config.json ...
	I0917 01:38:14.632550    1634 start.go:128] duration metric: took 19.601619625s to createHost
	I0917 01:38:14.632585    1634 main.go:141] libmachine: Using SSH client type: native
	I0917 01:38:14.632686    1634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e39190] 0x104e3b9d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0917 01:38:14.632691    1634 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 01:38:14.677232    1634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726562294.976438836
	
	I0917 01:38:14.677243    1634 fix.go:216] guest clock: 1726562294.976438836
	I0917 01:38:14.677247    1634 fix.go:229] Guest: 2024-09-17 01:38:14.976438836 -0700 PDT Remote: 2024-09-17 01:38:14.632553 -0700 PDT m=+19.711477126 (delta=343.885836ms)
	I0917 01:38:14.677266    1634 fix.go:200] guest clock delta is within tolerance: 343.885836ms
	I0917 01:38:14.677268    1634 start.go:83] releasing machines lock for "addons-401000", held for 19.646379708s
	I0917 01:38:14.677593    1634 ssh_runner.go:195] Run: cat /version.json
	I0917 01:38:14.677605    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:14.677737    1634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:38:14.677757    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:14.704355    1634 ssh_runner.go:195] Run: systemctl --version
	I0917 01:38:14.793112    1634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 01:38:14.795843    1634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 01:38:14.795894    1634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:38:14.803560    1634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 01:38:14.803585    1634 start.go:495] detecting cgroup driver to use...
	I0917 01:38:14.803730    1634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:38:14.811543    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 01:38:14.815699    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 01:38:14.819710    1634 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 01:38:14.819737    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 01:38:14.823429    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 01:38:14.827096    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 01:38:14.830905    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 01:38:14.834931    1634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:38:14.839039    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 01:38:14.842984    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 01:38:14.846846    1634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 01:38:14.850917    1634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:38:14.854684    1634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:38:14.858360    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:14.950967    1634 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 01:38:14.962147    1634 start.go:495] detecting cgroup driver to use...
	I0917 01:38:14.962234    1634 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 01:38:14.973114    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:38:14.978324    1634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:38:14.985511    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:38:14.990779    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 01:38:14.996499    1634 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 01:38:15.035023    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 01:38:15.042176    1634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:38:15.048743    1634 ssh_runner.go:195] Run: which cri-dockerd
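The crictl.yaml written just above repoints crictl at cri-dockerd's socket, and the which cri-dockerd check confirms the shim binary is present, so standard CRI tooling now drives the Docker runtime. A quick check from inside the guest (a sketch):
    $ sudo crictl version    # RuntimeName should report docker
    $ sudo crictl info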
	I0917 01:38:15.049976    1634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 01:38:15.053214    1634 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 01:38:15.059089    1634 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 01:38:15.142349    1634 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 01:38:15.226216    1634 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 01:38:15.226272    1634 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 01:38:15.232651    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:15.323000    1634 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 01:38:17.503713    1634 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.180689208s)
	I0917 01:38:17.503781    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 01:38:17.509500    1634 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 01:38:17.516086    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 01:38:17.521455    1634 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 01:38:17.605980    1634 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 01:38:17.690360    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:17.771838    1634 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 01:38:17.778488    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 01:38:17.783950    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:17.866457    1634 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 01:38:17.891746    1634 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 01:38:17.892585    1634 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 01:38:17.894930    1634 start.go:563] Will wait 60s for crictl version
	I0917 01:38:17.894977    1634 ssh_runner.go:195] Run: which crictl
	I0917 01:38:17.896442    1634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:38:17.915304    1634 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 01:38:17.915384    1634 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 01:38:17.927389    1634 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 01:38:17.941886    1634 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 01:38:17.942119    1634 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0917 01:38:17.943632    1634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:38:17.947781    1634 kubeadm.go:883] updating cluster {Name:addons-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:38:17.947827    1634 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:38:17.947878    1634 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 01:38:17.953003    1634 docker.go:685] Got preloaded images: 
	I0917 01:38:17.953011    1634 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0917 01:38:17.953059    1634 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 01:38:17.956503    1634 ssh_runner.go:195] Run: which lz4
	I0917 01:38:17.958037    1634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 01:38:17.959533    1634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 01:38:17.959542    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0917 01:38:19.210725    1634 docker.go:649] duration metric: took 1.252738458s to copy over tarball
	I0917 01:38:19.210785    1634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 01:38:20.149813    1634 ssh_runner.go:146] rm: /preloaded.tar.lz4
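The preload step is a streamed extraction, not an image pull: the ~322 MB lz4 tarball of prebaked /var/lib/docker layers is scp'd to the guest and unpacked in place, which is why the next docker images call lists a full set without network access. The unpack, as run above:
    $ sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4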
	I0917 01:38:20.164494    1634 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 01:38:20.167958    1634 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0917 01:38:20.174047    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:20.262607    1634 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 01:38:22.979022    1634 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.716394625s)
	I0917 01:38:22.979130    1634 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 01:38:22.985376    1634 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 01:38:22.985384    1634 cache_images.go:84] Images are preloaded, skipping loading
	I0917 01:38:22.985390    1634 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0917 01:38:22.985449    1634 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-401000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 01:38:22.985516    1634 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 01:38:23.013174    1634 cni.go:84] Creating CNI manager for ""
	I0917 01:38:23.013190    1634 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:38:23.013214    1634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:38:23.013225    1634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-401000 NodeName:addons-401000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:38:23.013301    1634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-401000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:38:23.013374    1634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 01:38:23.017397    1634 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:38:23.017431    1634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:38:23.020791    1634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0917 01:38:23.026931    1634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:38:23.033085    1634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
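The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new by this scp. It can be sanity-checked on its own inside the guest (a sketch; the kubeadm config validate subcommand is assumed available in this kubeadm release):
    $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new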
	I0917 01:38:23.040083    1634 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0917 01:38:23.041513    1634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:38:23.045393    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:23.123554    1634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:38:23.132816    1634 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000 for IP: 192.168.105.2
	I0917 01:38:23.132837    1634 certs.go:194] generating shared ca certs ...
	I0917 01:38:23.132858    1634 certs.go:226] acquiring lock for ca certs: {Name:mkff5fc329c6145be4c1381e1b58175b65aa8cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.133328    1634 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key
	I0917 01:38:23.292015    1634 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt ...
	I0917 01:38:23.292026    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt: {Name:mk7ae385fd40848b34f09cbed899633401a8f8b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.292353    1634 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key ...
	I0917 01:38:23.292358    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key: {Name:mk15e4b072fe6cd91ccbf87418163e7b34b90757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.292519    1634 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key
	I0917 01:38:23.455987    1634 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt ...
	I0917 01:38:23.455997    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt: {Name:mka1359dca503891616d72206fe9048f273a4b80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.456432    1634 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key ...
	I0917 01:38:23.456438    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key: {Name:mkb234df1faa27798c8b2a8e767dbe91d892db34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.456621    1634 certs.go:256] generating profile certs ...
	I0917 01:38:23.456659    1634 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.key
	I0917 01:38:23.456667    1634 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt with IP's: []
	I0917 01:38:23.514128    1634 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt ...
	I0917 01:38:23.514132    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: {Name:mkfcdf6a71e116f606f9fc6f82161c500887a104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.514287    1634 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.key ...
	I0917 01:38:23.514292    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.key: {Name:mk0fdd11db44e3f9d77ab1689b74ba68915d5ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.514442    1634 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key.20c806b7
	I0917 01:38:23.514453    1634 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt.20c806b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0917 01:38:23.590294    1634 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt.20c806b7 ...
	I0917 01:38:23.590298    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt.20c806b7: {Name:mk3f92b810703c0365d0d98b955199c5e2055319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.590486    1634 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key.20c806b7 ...
	I0917 01:38:23.590492    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key.20c806b7: {Name:mkd9616e643fd283b006140765f9d1ca8c5d3812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.590657    1634 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt.20c806b7 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt
	I0917 01:38:23.590801    1634 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key.20c806b7 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key
	I0917 01:38:23.590924    1634 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.key
	I0917 01:38:23.590938    1634 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.crt with IP's: []
	I0917 01:38:23.646396    1634 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.crt ...
	I0917 01:38:23.646400    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.crt: {Name:mkdd6351cbea2a5fdc6cd3e0562e547876bdba26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.646552    1634 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.key ...
	I0917 01:38:23.646558    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.key: {Name:mk3c191c8fadbb20e705eff4ddd0fd96a0984bf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:23.646839    1634 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:38:23.646876    1634 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem (1082 bytes)
	I0917 01:38:23.646910    1634 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:38:23.646942    1634 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem (1675 bytes)
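
The certs.go lines above build a two-level PKI on the host: shared minikubeCA and proxyClientCA roots, then per-profile leaf certs, with the apiserver cert carrying the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2] so clients can reach it by service IP, loopback, or node IP. A self-contained crypto/x509 sketch of that CA-plus-leaf shape (not minikube's code; key sizes and lifetimes are illustrative):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Self-signed CA, standing in for minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Apiserver leaf cert signed by the CA, with the SANs from the log.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.105.2"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
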
	I0917 01:38:23.647377    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:38:23.656397    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:38:23.664585    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:38:23.672978    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:38:23.681117    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:38:23.689373    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:38:23.697578    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:38:23.706261    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 01:38:23.714737    1634 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:38:23.723163    1634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:38:23.730953    1634 ssh_runner.go:195] Run: openssl version
	I0917 01:38:23.733379    1634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:38:23.737348    1634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:23.738998    1634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:23.739023    1634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:38:23.741125    1634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
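
The openssl x509 -hash step and the b5213941.0 symlink above are how the CA becomes trusted inside the VM: OpenSSL locates CAs in /etc/ssl/certs by subject-hash filenames. A Go sketch of the same install step, shelling out for the hash exactly as the log does (trustCert is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// trustCert symlinks <subject-hash>.0 in the OpenSSL cert dir at the PEM.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		os.Remove(link) // "ln -fs" semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}
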
	I0917 01:38:23.744819    1634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:38:23.746233    1634 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 01:38:23.746275    1634 kubeadm.go:392] StartCluster: {Name:addons-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:38:23.746351    1634 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 01:38:23.751704    1634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:38:23.755351    1634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:38:23.759069    1634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:38:23.766222    1634 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:38:23.766231    1634 kubeadm.go:157] found existing configuration files:
	
	I0917 01:38:23.766277    1634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:38:23.769784    1634 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:38:23.769818    1634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:38:23.773209    1634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:38:23.776502    1634 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:38:23.776549    1634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:38:23.780300    1634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:38:23.783980    1634 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:38:23.784020    1634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:38:23.787590    1634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:38:23.791210    1634 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:38:23.791255    1634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
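
Each of the four "may not be in ... - will remove" blocks above is one pass of the same check: grep the kubeconfig for the expected control-plane endpoint and delete the file when the pattern (or the file itself) is missing, so kubeadm regenerates it cleanly. A compact Go rendering of that loop (paths and endpoint taken from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// grep exits non-zero when the pattern or the file is missing.
			if err := exec.Command("grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%s is stale or missing, removing\n", f)
				os.Remove(f)
			}
		}
	}
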
	I0917 01:38:23.794707    1634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 01:38:23.816640    1634 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 01:38:23.816669    1634 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 01:38:23.852961    1634 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 01:38:23.853017    1634 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 01:38:23.853061    1634 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 01:38:23.858218    1634 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 01:38:23.865389    1634 out.go:235]   - Generating certificates and keys ...
	I0917 01:38:23.865423    1634 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 01:38:23.865458    1634 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 01:38:23.940225    1634 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 01:38:24.122330    1634 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 01:38:24.187909    1634 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 01:38:24.230516    1634 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 01:38:24.329370    1634 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 01:38:24.329451    1634 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-401000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0917 01:38:24.453895    1634 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 01:38:24.453970    1634 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-401000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0917 01:38:24.511112    1634 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 01:38:24.616530    1634 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 01:38:24.741853    1634 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 01:38:24.741891    1634 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 01:38:24.814536    1634 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 01:38:25.004067    1634 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 01:38:25.086238    1634 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 01:38:25.256368    1634 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 01:38:25.328328    1634 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 01:38:25.328513    1634 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 01:38:25.329698    1634 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 01:38:25.337580    1634 out.go:235]   - Booting up control plane ...
	I0917 01:38:25.337644    1634 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 01:38:25.337685    1634 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 01:38:25.337732    1634 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 01:38:25.339870    1634 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 01:38:25.342384    1634 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 01:38:25.342412    1634 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 01:38:25.438041    1634 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 01:38:25.438105    1634 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 01:38:25.946102    1634 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.57825ms
	I0917 01:38:25.946344    1634 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 01:38:28.946886    1634 kubeadm.go:310] [api-check] The API server is healthy after 3.001135752s
	I0917 01:38:28.959676    1634 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 01:38:28.970697    1634 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 01:38:28.982223    1634 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 01:38:28.982394    1634 kubeadm.go:310] [mark-control-plane] Marking the node addons-401000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 01:38:28.986996    1634 kubeadm.go:310] [bootstrap-token] Using token: 6lpdqj.p3xjxy7g59p0mdn8
	I0917 01:38:28.999812    1634 out.go:235]   - Configuring RBAC rules ...
	I0917 01:38:28.999883    1634 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 01:38:28.999931    1634 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 01:38:29.001952    1634 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 01:38:29.003004    1634 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 01:38:29.004158    1634 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 01:38:29.005582    1634 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 01:38:29.356249    1634 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 01:38:29.759786    1634 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 01:38:30.355643    1634 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 01:38:30.356979    1634 kubeadm.go:310] 
	I0917 01:38:30.357098    1634 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 01:38:30.357120    1634 kubeadm.go:310] 
	I0917 01:38:30.357270    1634 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 01:38:30.357292    1634 kubeadm.go:310] 
	I0917 01:38:30.357384    1634 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 01:38:30.357499    1634 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 01:38:30.357630    1634 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 01:38:30.357645    1634 kubeadm.go:310] 
	I0917 01:38:30.357748    1634 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 01:38:30.357760    1634 kubeadm.go:310] 
	I0917 01:38:30.357884    1634 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 01:38:30.357895    1634 kubeadm.go:310] 
	I0917 01:38:30.357971    1634 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 01:38:30.358192    1634 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 01:38:30.358345    1634 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 01:38:30.358354    1634 kubeadm.go:310] 
	I0917 01:38:30.358500    1634 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 01:38:30.358620    1634 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 01:38:30.358659    1634 kubeadm.go:310] 
	I0917 01:38:30.358798    1634 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6lpdqj.p3xjxy7g59p0mdn8 \
	I0917 01:38:30.358981    1634 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d \
	I0917 01:38:30.359053    1634 kubeadm.go:310] 	--control-plane 
	I0917 01:38:30.359065    1634 kubeadm.go:310] 
	I0917 01:38:30.359224    1634 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 01:38:30.359233    1634 kubeadm.go:310] 
	I0917 01:38:30.359351    1634 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6lpdqj.p3xjxy7g59p0mdn8 \
	I0917 01:38:30.359528    1634 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d 
	I0917 01:38:30.360056    1634 kubeadm.go:310] W0917 08:38:24.113916    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 01:38:30.360607    1634 kubeadm.go:310] W0917 08:38:24.114543    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 01:38:30.360779    1634 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
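
The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which joining nodes use to pin the cluster CA before trusting it: the value is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A Go sketch that recomputes it from the CA cert written to the VM earlier in this log:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
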
	I0917 01:38:30.360801    1634 cni.go:84] Creating CNI manager for ""
	I0917 01:38:30.360821    1634 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:38:30.364561    1634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 01:38:30.375511    1634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 01:38:30.383353    1634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
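
The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above configures the bridge CNI that the preceding line recommended for the qemu2 driver with docker. A representative conflist for the 10.244.0.0/16 pod CIDR from the KubeProxyConfiguration, emitted from Go (the field values are typical bridge/host-local settings, not necessarily minikube's exact template):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conf := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{{
				"type":        "bridge",
				"bridge":      "bridge",
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			}},
		}
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
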
	I0917 01:38:30.394842    1634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:38:30.394933    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:30.394959    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-401000 minikube.k8s.io/updated_at=2024_09_17T01_38_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=addons-401000 minikube.k8s.io/primary=true
	I0917 01:38:30.456899    1634 ops.go:34] apiserver oom_adj: -16
	I0917 01:38:30.457077    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:30.958136    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:31.459321    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:31.959201    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:32.458216    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:32.959214    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:33.459333    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:33.959217    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:34.459185    1634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 01:38:34.504411    1634 kubeadm.go:1113] duration metric: took 4.109553959s to wait for elevateKubeSystemPrivileges
	I0917 01:38:34.504425    1634 kubeadm.go:394] duration metric: took 10.758135625s to StartCluster
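
The run of "kubectl get sa default" calls above, spaced roughly 500ms apart, is a plain poll: cluster-admin was just granted to kube-system service accounts via the minikube-rbac clusterrolebinding, and the loop waits until the token controller has created the default ServiceAccount before declaring StartCluster done. The equivalent loop in Go (kubeconfig path from the log; the timeout is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for the default service account")
	}
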
	I0917 01:38:34.504435    1634 settings.go:142] acquiring lock: {Name:mk2d861f3b7e502753ec34b4d96136a66d57e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.504614    1634 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:38:34.504897    1634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:38:34.505141    1634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 01:38:34.505152    1634 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 01:38:34.505191    1634 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 01:38:34.505242    1634 addons.go:69] Setting yakd=true in profile "addons-401000"
	I0917 01:38:34.505250    1634 addons.go:234] Setting addon yakd=true in "addons-401000"
	I0917 01:38:34.505263    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505278    1634 config.go:182] Loaded profile config "addons-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:34.505309    1634 addons.go:69] Setting inspektor-gadget=true in profile "addons-401000"
	I0917 01:38:34.505314    1634 addons.go:69] Setting volcano=true in profile "addons-401000"
	I0917 01:38:34.505311    1634 addons.go:69] Setting storage-provisioner=true in profile "addons-401000"
	I0917 01:38:34.505318    1634 addons.go:234] Setting addon inspektor-gadget=true in "addons-401000"
	I0917 01:38:34.505319    1634 addons.go:234] Setting addon volcano=true in "addons-401000"
	I0917 01:38:34.505321    1634 addons.go:234] Setting addon storage-provisioner=true in "addons-401000"
	I0917 01:38:34.505328    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505335    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505338    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505376    1634 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-401000"
	I0917 01:38:34.505388    1634 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-401000"
	I0917 01:38:34.505396    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505429    1634 addons.go:69] Setting default-storageclass=true in profile "addons-401000"
	I0917 01:38:34.505460    1634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-401000"
	I0917 01:38:34.505610    1634 addons.go:69] Setting gcp-auth=true in profile "addons-401000"
	I0917 01:38:34.505612    1634 addons.go:69] Setting registry=true in profile "addons-401000"
	I0917 01:38:34.505617    1634 addons.go:234] Setting addon registry=true in "addons-401000"
	I0917 01:38:34.505617    1634 mustload.go:65] Loading cluster: addons-401000
	I0917 01:38:34.505624    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505646    1634 retry.go:31] will retry after 510.157776ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505655    1634 addons.go:69] Setting metrics-server=true in profile "addons-401000"
	I0917 01:38:34.505659    1634 addons.go:234] Setting addon metrics-server=true in "addons-401000"
	I0917 01:38:34.505665    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505688    1634 config.go:182] Loaded profile config "addons-401000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:38:34.505693    1634 addons.go:69] Setting volumesnapshots=true in profile "addons-401000"
	I0917 01:38:34.505697    1634 addons.go:234] Setting addon volumesnapshots=true in "addons-401000"
	I0917 01:38:34.505704    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505803    1634 retry.go:31] will retry after 1.194074298s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505688    1634 retry.go:31] will retry after 1.253233414s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505810    1634 addons.go:69] Setting ingress=true in profile "addons-401000"
	I0917 01:38:34.505814    1634 addons.go:234] Setting addon ingress=true in "addons-401000"
	I0917 01:38:34.505820    1634 retry.go:31] will retry after 823.24978ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505824    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505827    1634 addons.go:69] Setting cloud-spanner=true in profile "addons-401000"
	I0917 01:38:34.505830    1634 addons.go:234] Setting addon cloud-spanner=true in "addons-401000"
	I0917 01:38:34.505837    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505869    1634 retry.go:31] will retry after 1.009176137s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505915    1634 retry.go:31] will retry after 1.468634096s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505921    1634 addons.go:69] Setting ingress-dns=true in profile "addons-401000"
	I0917 01:38:34.505952    1634 retry.go:31] will retry after 574.45518ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505947    1634 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-401000"
	I0917 01:38:34.506002    1634 retry.go:31] will retry after 1.032644417s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.506007    1634 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-401000"
	I0917 01:38:34.506032    1634 retry.go:31] will retry after 1.314444919s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.506032    1634 retry.go:31] will retry after 1.051701228s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.505971    1634 addons.go:234] Setting addon ingress-dns=true in "addons-401000"
	I0917 01:38:34.506045    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.506056    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:34.505920    1634 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-401000"
	I0917 01:38:34.506082    1634 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-401000"
	I0917 01:38:34.506202    1634 retry.go:31] will retry after 1.459307177s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.506236    1634 retry.go:31] will retry after 1.122101506s: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
	I0917 01:38:34.506255    1634 retry.go:31] will retry after 895.351224ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/monitor: connect: connection refused
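
The interleaved retry.go lines above come from the addon goroutines launched concurrently: each needs the machine's QEMU monitor socket, the first dials fail with connection refused, and every goroutine schedules its own randomized backoff so they do not all retry at once. A sketch of that dial-with-jitter pattern (dialMonitor and the socket path are illustrative):

	package main

	import (
		"math/rand"
		"net"
		"time"
	)

	// dialMonitor retries a unix-socket dial with a jittered delay.
	func dialMonitor(path string, attempts int) (net.Conn, error) {
		var err error
		for i := 0; i < attempts; i++ {
			var c net.Conn
			if c, err = net.Dial("unix", path); err == nil {
				return c, nil
			}
			// Base delay plus up to one second of jitter per attempt.
			time.Sleep(500*time.Millisecond + time.Duration(rand.Int63n(int64(time.Second))))
		}
		return nil, err
	}

	func main() {
		c, err := dialMonitor("/tmp/minikube/machines/addons-401000/monitor", 5)
		if err != nil {
			panic(err)
		}
		c.Close()
	}
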
	I0917 01:38:34.508739    1634 out.go:177] * Verifying Kubernetes components...
	I0917 01:38:34.516618    1634 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 01:38:34.521864    1634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 01:38:34.521872    1634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:38:34.525622    1634 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 01:38:34.525629    1634 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 01:38:34.525637    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:34.529717    1634 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:38:34.529727    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 01:38:34.529733    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:34.555968    1634 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
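
The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block that resolves host.minikube.internal to the gateway address before the forward plugin can shadow it, adds a log directive for query logging, and feeds the patched ConfigMap back through kubectl replace. The string surgery, reduced to Go (injectHosts is an illustrative name; the sample Corefile is abbreviated):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHosts places a CoreDNS hosts{} block ahead of the forward plugin.
	func injectHosts(corefile, ip, host string) string {
		block := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, host)
		return strings.Replace(corefile, "        forward .", block+"        forward .", 1)
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
		fmt.Println(injectHosts(corefile, "192.168.105.1", "host.minikube.internal"))
	}
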
	I0917 01:38:34.631904    1634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:38:34.660404    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:38:34.678774    1634 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 01:38:34.678790    1634 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 01:38:34.706561    1634 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 01:38:34.706574    1634 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 01:38:34.719442    1634 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 01:38:34.719454    1634 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 01:38:34.740245    1634 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 01:38:34.740258    1634 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 01:38:34.768694    1634 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 01:38:34.768711    1634 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 01:38:34.777164    1634 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0917 01:38:34.778639    1634 node_ready.go:35] waiting up to 6m0s for node "addons-401000" to be "Ready" ...
	I0917 01:38:34.783585    1634 node_ready.go:49] node "addons-401000" has status "Ready":"True"
	I0917 01:38:34.783603    1634 node_ready.go:38] duration metric: took 4.943458ms for node "addons-401000" to be "Ready" ...
	I0917 01:38:34.783607    1634 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 01:38:34.787643    1634 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-401000" in "kube-system" namespace to be "Ready" ...
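
From here the flow splits: addon installs continue below while node_ready.go and pod_ready.go poll the API server for the node's Ready condition and for the system-critical pods. A client-go sketch of the node check (requires k8s.io/client-go as a dependency; the node name and kubeconfig path are the ones in the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-401000", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node addons-401000 is Ready")
						return
					}
				}
			}
			time.Sleep(time.Second)
		}
		panic("timed out waiting for node Ready")
	}
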
	I0917 01:38:34.788741    1634 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 01:38:34.788752    1634 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 01:38:34.798552    1634 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 01:38:34.798562    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 01:38:34.809648    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 01:38:35.021696    1634 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 01:38:35.024712    1634 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 01:38:35.024721    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 01:38:35.024731    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.052891    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 01:38:35.083408    1634 addons.go:234] Setting addon default-storageclass=true in "addons-401000"
	I0917 01:38:35.083428    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:35.084012    1634 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 01:38:35.084020    1634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 01:38:35.084026    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.128043    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 01:38:35.281876    1634 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-401000" context rescaled to 1 replicas
	I0917 01:38:35.334686    1634 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 01:38:35.341646    1634 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 01:38:35.345695    1634 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 01:38:35.345704    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 01:38:35.345714    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.381319    1634 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 01:38:35.381331    1634 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 01:38:35.388776    1634 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 01:38:35.388786    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 01:38:35.396770    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 01:38:35.408862    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 01:38:35.418650    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 01:38:35.428714    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 01:38:35.434702    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 01:38:35.440642    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 01:38:35.448275    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 01:38:35.457718    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 01:38:35.467743    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 01:38:35.470648    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 01:38:35.470661    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 01:38:35.470673    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.507520    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 01:38:35.507531    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 01:38:35.519736    1634 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 01:38:35.523784    1634 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 01:38:35.523796    1634 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 01:38:35.523808    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.524084    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 01:38:35.524093    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 01:38:35.543712    1634 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 01:38:35.553685    1634 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 01:38:35.557340    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 01:38:35.557349    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 01:38:35.560714    1634 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 01:38:35.563590    1634 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 01:38:35.567144    1634 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 01:38:35.567151    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 01:38:35.567162    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.569920    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 01:38:35.569927    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 01:38:35.572927    1634 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 01:38:35.572936    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 01:38:35.572944    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.579505    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 01:38:35.579517    1634 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 01:38:35.596060    1634 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 01:38:35.596070    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 01:38:35.609358    1634 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 01:38:35.609371    1634 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 01:38:35.617253    1634 addons.go:475] Verifying addon registry=true in "addons-401000"
	I0917 01:38:35.620255    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 01:38:35.620263    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 01:38:35.621764    1634 out.go:177] * Verifying registry addon...
	I0917 01:38:35.622446    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 01:38:35.626118    1634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 01:38:35.632748    1634 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 01:38:35.635736    1634 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 01:38:35.635744    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 01:38:35.635754    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.636026    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 01:38:35.636031    1634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 01:38:35.636279    1634 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 01:38:35.636285    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:35.640972    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 01:38:35.660330    1634 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 01:38:35.660344    1634 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 01:38:35.671428    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 01:38:35.671438    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 01:38:35.701063    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:35.722918    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 01:38:35.722928    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 01:38:35.722990    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 01:38:35.778620    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 01:38:35.778637    1634 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 01:38:35.778688    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 01:38:35.833541    1634 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 01:38:35.837710    1634 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 01:38:35.840690    1634 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 01:38:35.840700    1634 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 01:38:35.840709    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.838063    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 01:38:35.847060    1634 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:38:35.852691    1634 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:38:35.855825    1634 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 01:38:35.855835    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 01:38:35.855846    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:35.968080    1634 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-401000"
	I0917 01:38:35.968103    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:36.010788    1634 out.go:177]   - Using image docker.io/busybox:stable
	I0917 01:38:36.014765    1634 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 01:38:36.021755    1634 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 01:38:36.021755    1634 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 01:38:36.021849    1634 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 01:38:36.021866    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:36.025254    1634 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 01:38:36.025265    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 01:38:36.025274    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:36.130475    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:36.149148    1634 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 01:38:36.149162    1634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 01:38:36.155815    1634 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 01:38:36.155829    1634 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 01:38:36.217994    1634 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 01:38:36.218007    1634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 01:38:36.266009    1634 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 01:38:36.266027    1634 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 01:38:36.281178    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 01:38:36.301865    1634 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 01:38:36.301878    1634 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 01:38:36.342995    1634 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 01:38:36.343008    1634 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 01:38:36.366167    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 01:38:36.405131    1634 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:38:36.405144    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 01:38:36.408293    1634 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 01:38:36.408302    1634 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 01:38:36.470368    1634 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 01:38:36.470377    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 01:38:36.578056    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:38:36.601302    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 01:38:36.629703    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:36.792407    1634 pod_ready.go:103] pod "etcd-addons-401000" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:37.149955    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:37.629616    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:38.192121    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:38.656348    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:38.834654    1634 pod_ready.go:103] pod "etcd-addons-401000" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:39.134624    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:39.271095    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.6300985s)
	I0917 01:38:39.271119    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.548113916s)
	I0917 01:38:39.271129    1634 addons.go:475] Verifying addon metrics-server=true in "addons-401000"
	I0917 01:38:39.271145    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.492443416s)
	I0917 01:38:39.271237    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.648777458s)
	I0917 01:38:39.538679    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.2574785s)
	I0917 01:38:39.538696    1634 addons.go:475] Verifying addon ingress=true in "addons-401000"
	I0917 01:38:39.538707    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.69795625s)
	I0917 01:38:39.538716    1634 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-401000"
	I0917 01:38:39.538764    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.172576167s)
	I0917 01:38:39.538860    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.960785791s)
	W0917 01:38:39.538871    1634 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 01:38:39.538883    1634 retry.go:31] will retry after 135.507977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
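
The failure above is the classic CRD race: a single apply creates the VolumeSnapshot CRDs and, in the same invocation, a VolumeSnapshotClass object, before the new CRDs are established in API discovery. minikube's answer (retry.go:31) is to back off and reapply, which succeeds below with --force once the CRDs have settled. A minimal sketch of that retry shape, assuming a generic func() error to wrap:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff reruns op with doubling waits, the shape of minikube's
	// retry.go ("will retry after 135.507977ms: ...").
	func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
		var err error
		wait := base
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithBackoff(func() error {
			calls++
			if calls < 3 {
				// stand-in for the apply hitting "ensure CRDs are installed first"
				return errors.New("no matches for kind \"VolumeSnapshotClass\"")
			}
			return nil
		}, 5, 135*time.Millisecond)
		fmt.Println(err) // <nil> once the stand-in apply succeeds
	}
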
	I0917 01:38:39.538885    1634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.937564875s)
	I0917 01:38:39.542660    1634 out.go:177] * Verifying ingress addon...
	I0917 01:38:39.552673    1634 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 01:38:39.552673    1634 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-401000 service yakd-dashboard -n yakd-dashboard
	
	I0917 01:38:39.561688    1634 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 01:38:39.563986    1634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 01:38:39.571546    1634 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 01:38:39.572416    1634 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 01:38:39.572422    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
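
Each kapi.go:96 line above and below is one tick of a label-selector poll: list the pods matching the label, keep waiting while any is not yet up, sleep, and go again. A minimal sketch of that loop with client-go, simplified to check only for the Running phase (minikube's real check also inspects the Ready condition); the helper name waitForLabel is ours:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector in ns is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for %q", selector)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}
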
	I0917 01:38:39.674610    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 01:38:39.682063    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:39.793728    1634 pod_ready.go:93] pod "etcd-addons-401000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:38:39.793738    1634 pod_ready.go:82] duration metric: took 5.006070458s for pod "etcd-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:39.793742    1634 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:40.069762    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:40.169856    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:40.566291    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:40.628838    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:41.071695    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:41.178008    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:41.567980    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:41.631017    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:41.799102    1634 pod_ready.go:103] pod "kube-apiserver-addons-401000" in "kube-system" namespace has status "Ready":"False"
	I0917 01:38:42.066088    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:42.167496    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:42.566514    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:42.630098    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:43.066200    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:43.168195    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:43.514108    1634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 01:38:43.514125    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:43.541255    1634 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 01:38:43.548292    1634 addons.go:234] Setting addon gcp-auth=true in "addons-401000"
	I0917 01:38:43.548313    1634 host.go:66] Checking if "addons-401000" exists ...
	I0917 01:38:43.549017    1634 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 01:38:43.549024    1634 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/addons-401000/id_rsa Username:docker}
	I0917 01:38:43.565910    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:43.576697    1634 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 01:38:43.582659    1634 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 01:38:43.588711    1634 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 01:38:43.588718    1634 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 01:38:43.594981    1634 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 01:38:43.594988    1634 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 01:38:43.601485    1634 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 01:38:43.601491    1634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 01:38:43.610266    1634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 01:38:43.629992    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:43.875761    1634 addons.go:475] Verifying addon gcp-auth=true in "addons-401000"
	I0917 01:38:43.881925    1634 out.go:177] * Verifying gcp-auth addon...
	I0917 01:38:43.889162    1634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 01:38:43.890349    1634 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 01:38:44.266469    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:44.266710    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:44.298380    1634 pod_ready.go:93] pod "kube-apiserver-addons-401000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:38:44.298388    1634 pod_ready.go:82] duration metric: took 4.50463525s for pod "kube-apiserver-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:44.298393    1634 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:44.300406    1634 pod_ready.go:93] pod "kube-controller-manager-addons-401000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:38:44.300412    1634 pod_ready.go:82] duration metric: took 2.016083ms for pod "kube-controller-manager-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:44.300417    1634 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:44.566387    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:44.629918    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:44.805181    1634 pod_ready.go:93] pod "kube-scheduler-addons-401000" in "kube-system" namespace has status "Ready":"True"
	I0917 01:38:44.805190    1634 pod_ready.go:82] duration metric: took 504.769292ms for pod "kube-scheduler-addons-401000" in "kube-system" namespace to be "Ready" ...
	I0917 01:38:44.805194    1634 pod_ready.go:39] duration metric: took 10.021562625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 01:38:44.805204    1634 api_server.go:52] waiting for apiserver process to appear ...
	I0917 01:38:44.805273    1634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:38:44.811702    1634 api_server.go:72] duration metric: took 10.306521459s to wait for apiserver process to appear ...
	I0917 01:38:44.811711    1634 api_server.go:88] waiting for apiserver healthz status ...
	I0917 01:38:44.811718    1634 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0917 01:38:44.814421    1634 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0917 01:38:44.815008    1634 api_server.go:141] control plane version: v1.31.1
	I0917 01:38:44.815019    1634 api_server.go:131] duration metric: took 3.305667ms to wait for apiserver health ...
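
The healthz step logged above (api_server.go:253) is a plain HTTPS GET against the apiserver that passes once the body reads "ok". A minimal sketch of that probe, assuming it is acceptable to skip certificate verification the way a disposable test harness can; a real client would load the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Test-only client: certificate verification is skipped because this
		// sketch does not ship the cluster CA. Production code should verify.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.105.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}
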
	I0917 01:38:44.815022    1634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 01:38:44.819951    1634 system_pods.go:59] 17 kube-system pods found
	I0917 01:38:44.819960    1634 system_pods.go:61] "coredns-7c65d6cfc9-x9mm5" [9e357140-c4b3-4916-a393-59bc703e83fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 01:38:44.819963    1634 system_pods.go:61] "csi-hostpath-attacher-0" [11ce4f2a-3afc-4776-9437-f114015733ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 01:38:44.819967    1634 system_pods.go:61] "csi-hostpath-resizer-0" [91ccaa85-dce6-46c1-b8c9-d7623f464fdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 01:38:44.819969    1634 system_pods.go:61] "csi-hostpathplugin-bkvd8" [5d95032a-42c3-46f9-8ad8-f9b3e6d7695b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 01:38:44.819974    1634 system_pods.go:61] "etcd-addons-401000" [98505331-65a5-4d04-94c5-8593d542c352] Running
	I0917 01:38:44.819976    1634 system_pods.go:61] "kube-apiserver-addons-401000" [12b0122a-1086-4866-a54e-7aa00c75563d] Running
	I0917 01:38:44.819978    1634 system_pods.go:61] "kube-controller-manager-addons-401000" [55e4791a-706f-4645-8a75-7e492a5ea647] Running
	I0917 01:38:44.819981    1634 system_pods.go:61] "kube-ingress-dns-minikube" [bdf599b7-b34d-47ef-a968-d06c1d749332] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 01:38:44.819982    1634 system_pods.go:61] "kube-proxy-h79nx" [573d58fa-dc0e-49f0-b340-82f98e7a5496] Running
	I0917 01:38:44.819984    1634 system_pods.go:61] "kube-scheduler-addons-401000" [0cd15600-c8a5-415c-b426-c695f84b2f03] Running
	I0917 01:38:44.819986    1634 system_pods.go:61] "metrics-server-84c5f94fbc-9sskp" [8c247fc7-c700-4404-b3a6-e2031d9cc335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 01:38:44.819989    1634 system_pods.go:61] "nvidia-device-plugin-daemonset-6qb27" [caa6a2fb-5902-4ec2-95de-42e47a1db59c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 01:38:44.819991    1634 system_pods.go:61] "registry-66c9cd494c-2rpt2" [d249abf4-4b5e-493f-9c58-7f33d9b1ff7c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 01:38:44.819994    1634 system_pods.go:61] "registry-proxy-gr45x" [408375c4-f267-45e0-b73e-2342a51e46e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 01:38:44.819996    1634 system_pods.go:61] "snapshot-controller-56fcc65765-7bgs2" [7e54a8bb-a570-48d6-b032-c5cd7aa7bd10] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:38:44.819999    1634 system_pods.go:61] "snapshot-controller-56fcc65765-w8qdb" [28966af5-cb6c-4658-8b7f-ce15d6f80963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:38:44.820001    1634 system_pods.go:61] "storage-provisioner" [321a93c6-86e8-4898-993c-d37c07fe2a46] Running
	I0917 01:38:44.820003    1634 system_pods.go:74] duration metric: took 4.978583ms to wait for pod list to return data ...
	I0917 01:38:44.820006    1634 default_sa.go:34] waiting for default service account to be created ...
	I0917 01:38:44.821006    1634 default_sa.go:45] found service account: "default"
	I0917 01:38:44.821018    1634 default_sa.go:55] duration metric: took 1.003417ms for default service account to be created ...
	I0917 01:38:44.821020    1634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 01:38:44.825427    1634 system_pods.go:86] 17 kube-system pods found
	I0917 01:38:44.825435    1634 system_pods.go:89] "coredns-7c65d6cfc9-x9mm5" [9e357140-c4b3-4916-a393-59bc703e83fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 01:38:44.825439    1634 system_pods.go:89] "csi-hostpath-attacher-0" [11ce4f2a-3afc-4776-9437-f114015733ce] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 01:38:44.825442    1634 system_pods.go:89] "csi-hostpath-resizer-0" [91ccaa85-dce6-46c1-b8c9-d7623f464fdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 01:38:44.825446    1634 system_pods.go:89] "csi-hostpathplugin-bkvd8" [5d95032a-42c3-46f9-8ad8-f9b3e6d7695b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 01:38:44.825448    1634 system_pods.go:89] "etcd-addons-401000" [98505331-65a5-4d04-94c5-8593d542c352] Running
	I0917 01:38:44.825450    1634 system_pods.go:89] "kube-apiserver-addons-401000" [12b0122a-1086-4866-a54e-7aa00c75563d] Running
	I0917 01:38:44.825452    1634 system_pods.go:89] "kube-controller-manager-addons-401000" [55e4791a-706f-4645-8a75-7e492a5ea647] Running
	I0917 01:38:44.825455    1634 system_pods.go:89] "kube-ingress-dns-minikube" [bdf599b7-b34d-47ef-a968-d06c1d749332] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 01:38:44.825456    1634 system_pods.go:89] "kube-proxy-h79nx" [573d58fa-dc0e-49f0-b340-82f98e7a5496] Running
	I0917 01:38:44.825458    1634 system_pods.go:89] "kube-scheduler-addons-401000" [0cd15600-c8a5-415c-b426-c695f84b2f03] Running
	I0917 01:38:44.825461    1634 system_pods.go:89] "metrics-server-84c5f94fbc-9sskp" [8c247fc7-c700-4404-b3a6-e2031d9cc335] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 01:38:44.825463    1634 system_pods.go:89] "nvidia-device-plugin-daemonset-6qb27" [caa6a2fb-5902-4ec2-95de-42e47a1db59c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 01:38:44.825467    1634 system_pods.go:89] "registry-66c9cd494c-2rpt2" [d249abf4-4b5e-493f-9c58-7f33d9b1ff7c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 01:38:44.825469    1634 system_pods.go:89] "registry-proxy-gr45x" [408375c4-f267-45e0-b73e-2342a51e46e4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 01:38:44.825472    1634 system_pods.go:89] "snapshot-controller-56fcc65765-7bgs2" [7e54a8bb-a570-48d6-b032-c5cd7aa7bd10] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:38:44.825475    1634 system_pods.go:89] "snapshot-controller-56fcc65765-w8qdb" [28966af5-cb6c-4658-8b7f-ce15d6f80963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 01:38:44.825476    1634 system_pods.go:89] "storage-provisioner" [321a93c6-86e8-4898-993c-d37c07fe2a46] Running
	I0917 01:38:44.825479    1634 system_pods.go:126] duration metric: took 4.455875ms to wait for k8s-apps to be running ...
	I0917 01:38:44.825482    1634 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 01:38:44.825528    1634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:38:44.831483    1634 system_svc.go:56] duration metric: took 5.996125ms WaitForService to wait for kubelet
	I0917 01:38:44.831496    1634 kubeadm.go:582] duration metric: took 10.326317125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:38:44.831508    1634 node_conditions.go:102] verifying NodePressure condition ...
	I0917 01:38:44.832916    1634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 01:38:44.832923    1634 node_conditions.go:123] node cpu capacity is 2
	I0917 01:38:44.832929    1634 node_conditions.go:105] duration metric: took 1.417875ms to run NodePressure ...
	I0917 01:38:44.832934    1634 start.go:241] waiting for startup goroutines ...
	I0917 01:38:45.066571    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:45.130206    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:45.566092    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:45.631972    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:46.066578    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:46.205425    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:46.566057    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:46.629719    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:47.068055    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:47.168077    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:47.568921    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:47.630332    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:48.066158    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:48.188294    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:48.565940    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:48.629983    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:49.068270    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:49.129735    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:49.567302    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:49.630552    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:50.066294    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:50.129787    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:50.566209    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:50.630074    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:51.066487    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:51.130093    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:51.566341    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:51.630063    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:52.066355    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:52.129968    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 01:38:52.566883    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:52.667968    1634 kapi.go:107] duration metric: took 17.041817833s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 01:38:53.067388    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:53.566477    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:54.066210    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:54.566462    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:55.066373    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:55.566419    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:56.069935    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:56.565957    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:57.066130    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:57.566627    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:58.066364    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:58.566480    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:59.066739    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:38:59.566230    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:00.066743    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:00.566317    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:01.066790    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:01.566300    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:02.066304    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:02.566115    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:03.066376    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:03.565728    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:04.066375    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:04.569743    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:05.067285    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:05.567396    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:06.066633    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:06.566876    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:07.066488    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:07.566078    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:08.065865    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:08.566590    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:09.066874    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:09.566274    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:10.067030    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:10.566101    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:11.066461    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:11.571097    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:12.066458    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:12.566549    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:13.066527    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:13.568524    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:14.066156    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:14.566185    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:15.066048    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:15.566203    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:16.068236    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:16.573434    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:17.066103    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:17.566172    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:18.074224    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:18.567411    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:19.066099    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:19.566584    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:20.067720    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:20.566695    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:21.066302    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:21.566587    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:22.067219    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:22.578094    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:23.066051    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:23.566234    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:24.067572    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:24.567300    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:25.066481    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:25.566496    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:26.066396    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:26.567056    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:27.065947    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:27.566206    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:28.101168    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:28.566179    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:29.066221    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:29.566345    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:30.066908    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:30.566465    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:31.066243    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:31.567243    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:32.066373    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:32.566093    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:33.066355    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:33.566328    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:34.066378    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:34.567535    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:35.066579    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:35.566696    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:36.066523    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 01:39:36.571723    1634 kapi.go:107] duration metric: took 57.007615667s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 01:40:01.565624    1634 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 01:40:01.565634    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:02.065737    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:02.567428    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:03.068880    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:03.566985    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:04.066242    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:04.575559    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:05.067217    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:05.571342    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:06.065250    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:06.393098    1634 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 01:40:06.393109    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:06.564956    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:06.894530    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:07.066534    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:07.399484    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:07.572026    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:07.894420    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:08.068097    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:08.393807    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:08.565908    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:08.894850    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:09.064825    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:09.395333    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:09.571068    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:09.895289    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:10.065601    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:10.393347    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:10.566172    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:10.893788    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:11.065901    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:11.394242    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:11.567046    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:11.894944    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:12.066686    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:12.395576    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:12.569850    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:12.894260    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:13.066719    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:13.393226    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:13.565798    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:13.893046    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:14.066065    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:14.398395    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:14.567896    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:14.895206    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:15.067774    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:15.396316    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:15.568398    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:15.894440    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:16.066442    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:16.393649    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:16.566097    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:16.892896    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:17.065521    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:17.393651    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:17.566225    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:17.893972    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:18.065355    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:18.393063    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:18.566112    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:18.893860    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:19.068208    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:19.393900    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:19.567668    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:19.897729    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:20.065800    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:20.393063    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:20.565938    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:20.894337    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:21.067113    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:21.397887    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:21.570181    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:21.897208    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:22.068905    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:22.394801    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:22.567409    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:22.896613    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:23.067945    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:23.392699    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:23.570334    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:23.896890    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:24.068423    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:24.397765    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:24.570070    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:24.896783    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:25.069648    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:25.395140    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:25.572062    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:25.893966    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:26.065944    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:26.392656    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:26.566355    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:26.894505    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:27.066005    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:27.393889    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:27.566833    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:27.893535    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:28.066567    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:28.394002    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:28.569616    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:28.896328    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:29.068263    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:29.395946    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:29.569686    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:29.896309    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:30.065686    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:30.393040    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:30.565731    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:30.897525    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:31.070152    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:31.397940    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:31.571731    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:31.900000    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:32.067939    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:32.392812    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:32.566894    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:32.897669    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:33.071873    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:33.393859    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:33.567021    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:33.896269    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:34.066937    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:34.393941    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:34.567448    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:34.894069    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:35.070237    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:35.395763    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:35.569371    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:35.896003    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:36.066691    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:36.393483    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:36.566011    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:36.896862    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:37.067604    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:37.394712    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:37.571344    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:37.894092    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:38.066615    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:38.394777    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:38.569287    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:38.897173    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:39.072207    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:39.395641    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:39.572078    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:39.896619    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:40.065903    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:40.393057    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:40.566151    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:40.895021    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:41.070478    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:41.394043    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:41.566439    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:41.897297    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:42.067347    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:42.394081    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:42.565168    1634 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 01:40:42.565178    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:42.893905    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:43.066008    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:43.393076    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:43.565962    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:43.893007    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:44.065761    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:44.392861    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:44.565104    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:44.892962    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:45.066084    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:45.392922    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:45.566521    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:45.893857    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:46.066442    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:46.391873    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:46.566149    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:46.894367    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:47.067043    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:47.393178    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:47.566630    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:47.894000    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:48.066169    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:48.393703    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:48.566075    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:48.895102    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:49.069378    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:49.393869    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:49.566521    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:49.893787    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:50.066319    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:50.393387    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:50.566375    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:50.893786    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:51.066983    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:51.393658    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:51.567286    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:51.898789    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:52.068054    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:52.397546    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:52.568074    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:52.895717    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:53.071495    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:53.392878    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:53.572202    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:53.897236    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:54.067675    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:54.393435    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:54.566300    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:54.894768    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:55.069770    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:55.393534    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:55.566216    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:55.897284    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:56.069339    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:56.396540    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:56.566601    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:56.898536    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:57.068846    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:57.397649    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:57.568067    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:57.893914    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:58.066230    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:58.394641    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:58.570367    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:58.897747    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:59.073827    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:59.394069    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:40:59.567544    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:40:59.894108    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:00.066299    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:00.402799    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:00.566832    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:00.894455    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:01.066949    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:01.393770    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:01.570633    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:01.894594    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:02.068321    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:02.398434    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:02.574052    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:02.894946    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:03.069084    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:03.395923    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:03.574370    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:03.896987    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:04.067359    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:04.394071    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:04.575530    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:04.894591    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:05.068685    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:05.392985    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:05.566377    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:05.893301    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:06.066162    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:06.394514    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:06.566033    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:06.893777    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:07.066718    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:07.393255    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:07.566077    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:07.892928    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:08.066175    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:08.393408    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:08.565946    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:08.893239    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:09.065952    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:09.393414    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:09.565963    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:09.893450    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:10.066229    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:10.392974    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:10.566155    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:10.896145    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:11.066180    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:11.393143    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:11.566254    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:11.893236    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:12.066199    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:12.391336    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:12.566181    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:12.893468    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:13.066116    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:13.392983    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:13.566328    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:13.893954    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:14.066180    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:14.393722    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:14.566852    1634 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 01:41:14.893238    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:15.064291    1634 kapi.go:107] duration metric: took 2m35.50233125s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 01:41:15.406484    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:15.893319    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:16.393278    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:16.893067    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:17.469154    1634 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 01:41:17.893145    1634 kapi.go:107] duration metric: took 2m34.003715458s to wait for kubernetes.io/minikube-addons=gcp-auth ...
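The alternating "waiting for pod" lines above come from a simple label-selector poll loop: list the pods matching a selector, check their phase, sleep, and retry until one is Running or the timeout elapses. A minimal sketch of that pattern, assuming k8s.io/client-go (this is not minikube's actual kapi.go implementation; the function name waitForLabeledPod is hypothetical):

	// waitForLabeledPod polls until a pod matching the given label selector
	// reaches the Running phase, or the timeout elapses. Illustrative only.
	package kapisketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func waitForLabeledPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil // a matching pod is Running; done waiting
					}
					// Comparable to the "current state: Pending" lines above.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in this log
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}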
	I0917 01:41:17.897899    1634 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-401000 cluster.
	I0917 01:41:17.903879    1634 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 01:41:17.909697    1634 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 01:41:17.914826    1634 out.go:177] * Enabled addons: storage-provisioner, inspektor-gadget, nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, ingress-dns, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 01:41:17.918875    1634 addons.go:510] duration metric: took 2m43.413400583s for enable addons: enabled=[storage-provisioner inspektor-gadget nvidia-device-plugin default-storageclass cloud-spanner metrics-server ingress-dns volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 01:41:17.918892    1634 start.go:246] waiting for cluster config update ...
	I0917 01:41:17.918902    1634 start.go:255] writing updated cluster config ...
	I0917 01:41:17.920082    1634 ssh_runner.go:195] Run: rm -f paused
	I0917 01:41:18.076396    1634 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I0917 01:41:18.079928    1634 out.go:177] * Done! kubectl is now configured to use "addons-401000" cluster and "default" namespace by default
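The gcp-auth hint printed above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration") can be illustrated with a short client-go sketch. Only the label key itself comes from the minikube output; the pod name, image, and everything else here are hypothetical:

	// Creates a pod carrying the gcp-auth-skip-secret label, which tells the
	// gcp-auth webhook not to mount GCP credentials into it. Sketch only,
	// assuming a standard ~/.kube/config pointing at the minikube cluster.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth", // hypothetical name
				// The label from the minikube message above: opt this pod out
				// of credential mounting.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}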
	
	
	==> Docker <==
	Sep 17 08:51:07 addons-401000 dockerd[1288]: time="2024-09-17T08:51:07.830426694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 08:51:07 addons-401000 dockerd[1288]: time="2024-09-17T08:51:07.830441028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:51:07 addons-401000 dockerd[1288]: time="2024-09-17T08:51:07.830518112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:51:07 addons-401000 cri-dockerd[1184]: time="2024-09-17T08:51:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2104c20b48e72801cbee478613b0b2c54002025f6bc4c8e31d4646ec7fdfc46f/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 08:51:08 addons-401000 dockerd[1282]: time="2024-09-17T08:51:08.099817918Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 17 08:51:08 addons-401000 dockerd[1282]: time="2024-09-17T08:51:08.719854066Z" level=info msg="ignoring event" container=e712e5137dd0fa164563d49eee8490f9d081ab45e50857313dab91e2488dd1af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.720094694Z" level=info msg="shim disconnected" id=e712e5137dd0fa164563d49eee8490f9d081ab45e50857313dab91e2488dd1af namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.720153861Z" level=warning msg="cleaning up after shim disconnected" id=e712e5137dd0fa164563d49eee8490f9d081ab45e50857313dab91e2488dd1af namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.720159070Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1282]: time="2024-09-17T08:51:08.838116921Z" level=info msg="ignoring event" container=f6ea46e82c4864707361c827a5dded946491248afa64cb47c3f1862dd9098d3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.837923127Z" level=info msg="shim disconnected" id=f6ea46e82c4864707361c827a5dded946491248afa64cb47c3f1862dd9098d3f namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.838211255Z" level=warning msg="cleaning up after shim disconnected" id=f6ea46e82c4864707361c827a5dded946491248afa64cb47c3f1862dd9098d3f namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.838216047Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.901951488Z" level=info msg="shim disconnected" id=4011b72ac88a1ba07621feb784978a102dd578aff4f0324160b78ea654d37baf namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.902103074Z" level=warning msg="cleaning up after shim disconnected" id=4011b72ac88a1ba07621feb784978a102dd578aff4f0324160b78ea654d37baf namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.902120699Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1282]: time="2024-09-17T08:51:08.902163449Z" level=info msg="ignoring event" container=4011b72ac88a1ba07621feb784978a102dd578aff4f0324160b78ea654d37baf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:08 addons-401000 dockerd[1282]: time="2024-09-17T08:51:08.955337929Z" level=info msg="ignoring event" container=83cd1750c3d280fb3ca5fdcf6c3fae3c49018dd62d9420799caed12278eff87b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.955255137Z" level=info msg="shim disconnected" id=83cd1750c3d280fb3ca5fdcf6c3fae3c49018dd62d9420799caed12278eff87b namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.955419055Z" level=warning msg="cleaning up after shim disconnected" id=83cd1750c3d280fb3ca5fdcf6c3fae3c49018dd62d9420799caed12278eff87b namespace=moby
	Sep 17 08:51:08 addons-401000 dockerd[1288]: time="2024-09-17T08:51:08.955423305Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:51:09 addons-401000 dockerd[1282]: time="2024-09-17T08:51:09.009143544Z" level=info msg="ignoring event" container=1e38e435ce2c35a53b5112fe48d361e2dad8ff774bf6daf70c71b38bda08e1ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:51:09 addons-401000 dockerd[1288]: time="2024-09-17T08:51:09.009373839Z" level=info msg="shim disconnected" id=1e38e435ce2c35a53b5112fe48d361e2dad8ff774bf6daf70c71b38bda08e1ba namespace=moby
	Sep 17 08:51:09 addons-401000 dockerd[1288]: time="2024-09-17T08:51:09.009433798Z" level=warning msg="cleaning up after shim disconnected" id=1e38e435ce2c35a53b5112fe48d361e2dad8ff774bf6daf70c71b38bda08e1ba namespace=moby
	Sep 17 08:51:09 addons-401000 dockerd[1288]: time="2024-09-17T08:51:09.009449632Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0e61b109c1dff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            58 seconds ago      Exited              gadget                    7                   72c4218a12ed2       gadget-xzdj9
	eabdb830e852c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   adf040a9e726d       gcp-auth-89d5ffd79-nxc6w
	5b86a0f2a0536       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             9 minutes ago       Running             controller                0                   dacbebd2f1df3       ingress-nginx-controller-bc57996ff-skgsj
	09d8892ce0fb4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              patch                     0                   2e6d70289400d       ingress-nginx-admission-patch-f6fcc
	7a5979eb14908       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   10 minutes ago      Exited              create                    0                   baae2189fbd1a       ingress-nginx-admission-create-ddk54
	178a92d2ed948       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       1                   89aa4aff5f120       storage-provisioner
	5740e3767ed4a       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   96ad02a46f2e7       local-path-provisioner-86d989889c-2krhb
	1fbcffc402948       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   259453afffdfb       metrics-server-84c5f94fbc-9sskp
	3e0f70b66c089       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   260d770fa8820       kube-ingress-dns-minikube
	beb9e1a550de6       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   d5288ce08f93a       cloud-spanner-emulator-769b77f747-tvrcb
	4011b72ac88a1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy            0                   1e38e435ce2c3       registry-proxy-gr45x
	f6ea46e82c486       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   83cd1750c3d28       registry-66c9cd494c-2rpt2
	3ceaa29e7bd32       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   6c4bedfba160b       coredns-7c65d6cfc9-x9mm5
	93df16cc58dc5       ba04bb24b9575                                                                                                                12 minutes ago      Exited              storage-provisioner       0                   89aa4aff5f120       storage-provisioner
	c895576aadd04       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   9c891349df257       kube-proxy-h79nx
	c40c6d4cc36be       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   929616484fccb       kube-controller-manager-addons-401000
	2b0441985257f       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   c85b212f31e93       etcd-addons-401000
	1eb6ea34d6e5a       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   efdbd0ae5bfd4       kube-scheduler-addons-401000
	10399d8ccab0d       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   5ebfb5a52022f       kube-apiserver-addons-401000
	
	
	==> controller_ingress [5b86a0f2a053] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0917 08:41:14.001479       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0917 08:41:14.001599       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0917 08:41:14.004888       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0917 08:41:14.052961       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0917 08:41:14.058422       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0917 08:41:14.063295       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0917 08:41:14.068811       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a0b70e22-d4fd-4855-85fc-e8f78530a7df", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0917 08:41:14.068835       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"47e8b8c0-99d3-4d3b-b37d-12505fc2e178", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0917 08:41:14.068839       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"f90a0ed2-412f-41c0-a6b7-553358e0934c", APIVersion:"v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0917 08:41:15.266255       7 nginx.go:317] "Starting NGINX process"
	I0917 08:41:15.266258       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0917 08:41:15.266426       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0917 08:41:15.267504       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 08:41:15.273760       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0917 08:41:15.273955       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-skgsj"
	I0917 08:41:15.276630       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-skgsj" node="addons-401000"
	I0917 08:41:15.284168       7 controller.go:213] "Backend successfully reloaded"
	I0917 08:41:15.284244       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0917 08:41:15.284406       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-skgsj", UID:"f9db84b0-20e2-461a-8993-9ad567534477", APIVersion:"v1", ResourceVersion:"1219", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [3ceaa29e7bd3] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[791158399]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 08:38:36.818) (total time: 30000ms):
	Trace[791158399]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (08:39:06.819)
	Trace[791158399]: [30.000338759s] [30.000338759s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.5:53602 - 30509 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00021638s
	[INFO] 10.244.0.5:53602 - 56866 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000094723s
	[INFO] 10.244.0.5:38929 - 46836 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091388s
	[INFO] 10.244.0.5:38929 - 31990 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000176689s
	[INFO] 10.244.0.5:43332 - 34578 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049196s
	[INFO] 10.244.0.5:43332 - 53013 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192615s
	[INFO] 10.244.0.5:45115 - 3335 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003673s
	[INFO] 10.244.0.5:45115 - 4614 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077672s
	[INFO] 10.244.0.5:60091 - 54540 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000022888s
	[INFO] 10.244.0.5:60091 - 58895 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003377s
	[INFO] 10.244.0.24:53541 - 32492 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000136892s
	[INFO] 10.244.0.24:55501 - 9055 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000238247s
	[INFO] 10.244.0.24:45023 - 1818 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000045367s
	[INFO] 10.244.0.24:51254 - 30298 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000026495s
	[INFO] 10.244.0.24:46291 - 40283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043159s
	[INFO] 10.244.0.24:54153 - 45687 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079235s
	[INFO] 10.244.0.24:49930 - 22580 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000746196s
	[INFO] 10.244.0.24:39981 - 21147 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000923163s
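The NXDOMAIN/NOERROR pairs above are resolv.conf search-list expansion at work: with `options ndots:5` (visible in the rewritten resolv.conf in the Docker log earlier), a name with fewer than five dots is first tried with each search domain appended, producing the NXDOMAIN lines, before the literal name finally resolves. A minimal sketch of that candidate-name expansion, assuming standard resolver behavior (`expandSearch` is a hypothetical helper, not CoreDNS code):

	// expandSearch returns the candidate FQDNs a resolver tries for a name,
	// given a search list and an ndots threshold. Illustrative only.
	package dnssketch

	import "strings"

	func expandSearch(name string, search []string, ndots int) []string {
		// A rooted name, or one with enough dots, is queried as-is.
		if strings.HasSuffix(name, ".") || strings.Count(name, ".") >= ndots {
			return []string{name}
		}
		out := make([]string, 0, len(search)+1)
		for _, s := range search {
			// e.g. "storage.googleapis.com" + ".gcp-auth.svc.cluster.local",
			// matching the NXDOMAIN queries in the coredns log above.
			out = append(out, name+"."+s)
		}
		return append(out, name) // the literal name is tried last
	}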
	
	
	==> describe nodes <==
	Name:               addons-401000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-401000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=addons-401000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T01_38_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-401000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:38:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-401000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:51:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:47:11 +0000   Tue, 17 Sep 2024 08:38:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:47:11 +0000   Tue, 17 Sep 2024 08:38:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:47:11 +0000   Tue, 17 Sep 2024 08:38:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:47:11 +0000   Tue, 17 Sep 2024 08:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-401000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d3341b2da3a4a9ab9965476c3c3af60
	  System UUID:                6d3341b2da3a4a9ab9965476c3c3af60
	  Boot ID:                    703825ff-596f-4b59-ad0e-d88773bca408
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-tvrcb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-xzdj9                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-nxc6w                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-skgsj                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-x9mm5                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-401000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-401000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-401000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-h79nx                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-401000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-9sskp                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  local-path-storage          local-path-provisioner-86d989889c-2krhb                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-401000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-401000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-401000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-401000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-401000 event: Registered Node addons-401000 in Controller
	
	
	==> dmesg <==
	[  +6.279682] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 08:39] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.810167] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.725187] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.274292] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.067815] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.065129] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.721240] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 08:40] kauditd_printk_skb: 21 callbacks suppressed
	[ +29.647045] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.526337] kauditd_printk_skb: 76 callbacks suppressed
	[  +5.201722] kauditd_printk_skb: 15 callbacks suppressed
	[Sep17 08:41] kauditd_printk_skb: 22 callbacks suppressed
	[ +11.376949] kauditd_printk_skb: 6 callbacks suppressed
	[ +15.314029] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.140906] kauditd_printk_skb: 20 callbacks suppressed
	[Sep17 08:42] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 08:45] kauditd_printk_skb: 10 callbacks suppressed
	[Sep17 08:50] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.381193] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.534545] kauditd_printk_skb: 7 callbacks suppressed
	[ +20.291611] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.588111] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.213430] kauditd_printk_skb: 6 callbacks suppressed
	[Sep17 08:51] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [2b0441985257] <==
	{"level":"info","ts":"2024-09-17T08:38:27.160511Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:27.160535Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:38:27.160010Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T08:38:27.160576Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T08:38:27.161074Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:38:27.161771Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T08:38:27.182855Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:38:27.193835Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"warn","ts":"2024-09-17T08:38:44.360582Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.503826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f5fb49328fd884\" ","response":"range_response_count:1 size:928"}
	{"level":"info","ts":"2024-09-17T08:38:44.360611Z","caller":"traceutil/trace.go:171","msg":"trace[751122703] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f5fb49328fd884; range_end:; response_count:1; response_revision:907; }","duration":"238.541994ms","start":"2024-09-17T08:38:44.122061Z","end":"2024-09-17T08:38:44.360603Z","steps":["trace[751122703] 'range keys from in-memory index tree'  (duration: 238.441346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:44.360707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.065217ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:38:44.360717Z","caller":"traceutil/trace.go:171","msg":"trace[171810102] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:907; }","duration":"196.075753ms","start":"2024-09-17T08:38:44.164638Z","end":"2024-09-17T08:38:44.360713Z","steps":["trace[171810102] 'range keys from in-memory index tree'  (duration: 196.048695ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:44.360752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.036149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:38:44.360758Z","caller":"traceutil/trace.go:171","msg":"trace[403824564] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:907; }","duration":"196.042422ms","start":"2024-09-17T08:38:44.164713Z","end":"2024-09-17T08:38:44.360756Z","steps":["trace[403824564] 'range keys from in-memory index tree'  (duration: 196.026761ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:44.360787Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.696351ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:38:44.360794Z","caller":"traceutil/trace.go:171","msg":"trace[2043251312] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:907; }","duration":"132.703484ms","start":"2024-09-17T08:38:44.228089Z","end":"2024-09-17T08:38:44.360792Z","steps":["trace[2043251312] 'range keys from in-memory index tree'  (duration: 132.657281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:38:44.360823Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.812665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:38:44.360829Z","caller":"traceutil/trace.go:171","msg":"trace[1297750015] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:907; }","duration":"112.818896ms","start":"2024-09-17T08:38:44.248009Z","end":"2024-09-17T08:38:44.360827Z","steps":["trace[1297750015] 'range keys from in-memory index tree'  (duration: 112.788764ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:38:45.377185Z","caller":"traceutil/trace.go:171","msg":"trace[513672578] transaction","detail":"{read_only:false; response_revision:917; number_of_response:1; }","duration":"121.659541ms","start":"2024-09-17T08:38:45.255516Z","end":"2024-09-17T08:38:45.377175Z","steps":["trace[513672578] 'process raft request'  (duration: 121.603716ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T08:39:08.026730Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.586004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T08:39:08.026765Z","caller":"traceutil/trace.go:171","msg":"trace[1667991422] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:999; }","duration":"164.637695ms","start":"2024-09-17T08:39:07.862120Z","end":"2024-09-17T08:39:08.026757Z","steps":["trace[1667991422] 'range keys from in-memory index tree'  (duration: 164.512506ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:41:12.819927Z","caller":"traceutil/trace.go:171","msg":"trace[240763700] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"187.061122ms","start":"2024-09-17T08:41:12.632854Z","end":"2024-09-17T08:41:12.819915Z","steps":["trace[240763700] 'process raft request'  (duration: 186.993759ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:48:27.202945Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1833}
	{"level":"info","ts":"2024-09-17T08:48:27.286333Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1833,"took":"82.955132ms","hash":1398442889,"current-db-size-bytes":8896512,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4739072,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-17T08:48:27.286366Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1398442889,"revision":1833,"compact-revision":-1}
	
	
	==> gcp-auth [eabdb830e852] <==
	2024/09/17 08:41:17 GCP Auth Webhook started!
	2024/09/17 08:41:33 Ready to marshal response ...
	2024/09/17 08:41:33 Ready to write response ...
	2024/09/17 08:41:34 Ready to marshal response ...
	2024/09/17 08:41:34 Ready to write response ...
	2024/09/17 08:41:57 Ready to marshal response ...
	2024/09/17 08:41:57 Ready to write response ...
	2024/09/17 08:41:57 Ready to marshal response ...
	2024/09/17 08:41:57 Ready to write response ...
	2024/09/17 08:41:57 Ready to marshal response ...
	2024/09/17 08:41:57 Ready to write response ...
	2024/09/17 08:50:08 Ready to marshal response ...
	2024/09/17 08:50:08 Ready to write response ...
	2024/09/17 08:50:15 Ready to marshal response ...
	2024/09/17 08:50:15 Ready to write response ...
	2024/09/17 08:50:37 Ready to marshal response ...
	2024/09/17 08:50:37 Ready to write response ...
	2024/09/17 08:51:07 Ready to marshal response ...
	2024/09/17 08:51:07 Ready to write response ...
	2024/09/17 08:51:07 Ready to marshal response ...
	2024/09/17 08:51:07 Ready to write response ...
	
	
	==> kernel <==
	 08:51:09 up 13 min,  0 users,  load average: 0.37, 0.54, 0.44
	Linux addons-401000 5.10.207 #1 SMP PREEMPT Sun Sep 15 17:39:25 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [10399d8ccab0] <==
	I0917 08:41:48.033192       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 08:41:48.034837       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0917 08:41:48.045958       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0917 08:41:48.104590       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0917 08:41:48.826103       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0917 08:41:49.008252       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0917 08:41:49.035823       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0917 08:41:49.038483       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 08:41:49.059318       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 08:41:49.105451       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 08:41:49.147945       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 08:50:22.762510       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 08:50:51.707894       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:50:51.707910       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:50:51.716012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:50:51.716038       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:50:51.718339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:50:51.718355       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:50:51.728821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:50:51.728854       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 08:50:51.740768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 08:50:51.740784       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 08:50:52.718926       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 08:50:52.740932       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 08:50:52.747028       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c40c6d4cc36b] <==
	W0917 08:50:56.182290       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:50:56.182387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:50:56.473097       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:50:56.473139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:50:56.683021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:50:56.683146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:50:57.012915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="3.625µs"
	W0917 08:50:58.535314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:50:58.535403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:00.945069       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:00.945117       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:01.673720       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:01.673853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:02.561143       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:02.561196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 08:51:03.348892       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:03.348956       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 08:51:05.115238       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 08:51:05.115284       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:51:05.491587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 08:51:05.491718       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 08:51:07.060085       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0917 08:51:08.820770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.167µs"
	W0917 08:51:08.953243       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 08:51:08.953266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [c895576aadd0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 08:38:36.913769       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 08:38:36.920160       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0917 08:38:36.920195       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:38:36.957691       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 08:38:36.957716       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 08:38:36.957733       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:38:36.958437       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:38:36.958583       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:38:36.958589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:38:36.959589       1 config.go:199] "Starting service config controller"
	I0917 08:38:36.959600       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:38:36.959612       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:38:36.959614       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:38:36.962505       1 config.go:328] "Starting node config controller"
	I0917 08:38:36.962513       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:38:37.062671       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:38:37.062689       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:38:37.062671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1eb6ea34d6e5] <==
	W0917 08:38:27.862828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 08:38:27.862847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.862901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:27.862919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.862970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 08:38:27.862992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.863038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 08:38:27.863058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.863079       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 08:38:27.863094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.864272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 08:38:27.864290       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.864325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 08:38:27.864339       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.864361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 08:38:27.864369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.864390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:27.864395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:27.864432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 08:38:27.864453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:28.809912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 08:38:28.810419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 08:38:28.837089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 08:38:28.837194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0917 08:38:29.058769       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.487973    2048 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd87030-2efd-4ce0-9dbc-740f76cac403" containerName="task-pv-container"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.487975    2048 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d95032a-42c3-46f9-8ad8-f9b3e6d7695b" containerName="hostpath"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.530974    2048 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-crkxb\" (UniqueName: \"kubernetes.io/projected/caa6a2fb-5902-4ec2-95de-42e47a1db59c-kube-api-access-crkxb\") on node \"addons-401000\" DevicePath \"\""
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.531074    2048 reconciler_common.go:288] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/caa6a2fb-5902-4ec2-95de-42e47a1db59c-device-plugin\") on node \"addons-401000\" DevicePath \"\""
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.613649    2048 scope.go:117] "RemoveContainer" containerID="459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.626290    2048 scope.go:117] "RemoveContainer" containerID="459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: E0917 08:51:07.626691    2048 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5" containerID="459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.626710    2048 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5"} err="failed to get container status \"459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5\": rpc error: code = Unknown desc = Error response from daemon: No such container: 459c1cffd01fc93d6a9756f060dcd0f44fdeaa8502a43a887b1881d8dddb80a5"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.632328    2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/95e54c31-b368-4d9c-8dc1-1e45e6e67efe-data\") pod \"helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2\" (UID: \"95e54c31-b368-4d9c-8dc1-1e45e6e67efe\") " pod="local-path-storage/helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.632368    2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/95e54c31-b368-4d9c-8dc1-1e45e6e67efe-script\") pod \"helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2\" (UID: \"95e54c31-b368-4d9c-8dc1-1e45e6e67efe\") " pod="local-path-storage/helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.632395    2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/95e54c31-b368-4d9c-8dc1-1e45e6e67efe-gcp-creds\") pod \"helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2\" (UID: \"95e54c31-b368-4d9c-8dc1-1e45e6e67efe\") " pod="local-path-storage/helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.632421    2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74bz\" (UniqueName: \"kubernetes.io/projected/95e54c31-b368-4d9c-8dc1-1e45e6e67efe-kube-api-access-k74bz\") pod \"helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2\" (UID: \"95e54c31-b368-4d9c-8dc1-1e45e6e67efe\") " pod="local-path-storage/helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2"
	Sep 17 08:51:07 addons-401000 kubelet[2048]: I0917 08:51:07.906431    2048 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa6a2fb-5902-4ec2-95de-42e47a1db59c" path="/var/lib/kubelet/pods/caa6a2fb-5902-4ec2-95de-42e47a1db59c/volumes"
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.844517    2048 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r496\" (UniqueName: \"kubernetes.io/projected/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-kube-api-access-6r496\") pod \"fa8b95ab-2ca5-45ce-9144-3a5872d498a9\" (UID: \"fa8b95ab-2ca5-45ce-9144-3a5872d498a9\") "
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.844537    2048 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-gcp-creds\") pod \"fa8b95ab-2ca5-45ce-9144-3a5872d498a9\" (UID: \"fa8b95ab-2ca5-45ce-9144-3a5872d498a9\") "
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.844575    2048 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "fa8b95ab-2ca5-45ce-9144-3a5872d498a9" (UID: "fa8b95ab-2ca5-45ce-9144-3a5872d498a9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.845278    2048 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-kube-api-access-6r496" (OuterVolumeSpecName: "kube-api-access-6r496") pod "fa8b95ab-2ca5-45ce-9144-3a5872d498a9" (UID: "fa8b95ab-2ca5-45ce-9144-3a5872d498a9"). InnerVolumeSpecName "kube-api-access-6r496". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.945167    2048 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-gcp-creds\") on node \"addons-401000\" DevicePath \"\""
	Sep 17 08:51:08 addons-401000 kubelet[2048]: I0917 08:51:08.945179    2048 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6r496\" (UniqueName: \"kubernetes.io/projected/fa8b95ab-2ca5-45ce-9144-3a5872d498a9-kube-api-access-6r496\") on node \"addons-401000\" DevicePath \"\""
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.045865    2048 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g8nc\" (UniqueName: \"kubernetes.io/projected/408375c4-f267-45e0-b73e-2342a51e46e4-kube-api-access-9g8nc\") pod \"408375c4-f267-45e0-b73e-2342a51e46e4\" (UID: \"408375c4-f267-45e0-b73e-2342a51e46e4\") "
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.045882    2048 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkv9\" (UniqueName: \"kubernetes.io/projected/d249abf4-4b5e-493f-9c58-7f33d9b1ff7c-kube-api-access-mmkv9\") pod \"d249abf4-4b5e-493f-9c58-7f33d9b1ff7c\" (UID: \"d249abf4-4b5e-493f-9c58-7f33d9b1ff7c\") "
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.047682    2048 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408375c4-f267-45e0-b73e-2342a51e46e4-kube-api-access-9g8nc" (OuterVolumeSpecName: "kube-api-access-9g8nc") pod "408375c4-f267-45e0-b73e-2342a51e46e4" (UID: "408375c4-f267-45e0-b73e-2342a51e46e4"). InnerVolumeSpecName "kube-api-access-9g8nc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.047818    2048 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d249abf4-4b5e-493f-9c58-7f33d9b1ff7c-kube-api-access-mmkv9" (OuterVolumeSpecName: "kube-api-access-mmkv9") pod "d249abf4-4b5e-493f-9c58-7f33d9b1ff7c" (UID: "d249abf4-4b5e-493f-9c58-7f33d9b1ff7c"). InnerVolumeSpecName "kube-api-access-mmkv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.146142    2048 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9g8nc\" (UniqueName: \"kubernetes.io/projected/408375c4-f267-45e0-b73e-2342a51e46e4-kube-api-access-9g8nc\") on node \"addons-401000\" DevicePath \"\""
	Sep 17 08:51:09 addons-401000 kubelet[2048]: I0917 08:51:09.146154    2048 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mmkv9\" (UniqueName: \"kubernetes.io/projected/d249abf4-4b5e-493f-9c58-7f33d9b1ff7c-kube-api-access-mmkv9\") on node \"addons-401000\" DevicePath \"\""
	
	
	==> storage-provisioner [178a92d2ed94] <==
	I0917 08:39:08.145635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:39:08.153784       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:39:08.153807       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:39:08.167946       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:39:08.168621       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-401000_37c6fd2b-c922-41d8-9fe5-7ce6b3dffd75!
	I0917 08:39:08.171309       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9eb95f0e-b3f1-4ce9-b050-8deff7353fb6", APIVersion:"v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-401000_37c6fd2b-c922-41d8-9fe5-7ce6b3dffd75 became leader
	I0917 08:39:08.273058       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-401000_37c6fd2b-c922-41d8-9fe5-7ce6b3dffd75!
	
	
	==> storage-provisioner [93df16cc58dc] <==
	I0917 08:38:36.838823       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0917 08:39:06.840784       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
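
Note on the log dump above: the first storage-provisioner container (93df16cc58dc) died with "dial tcp 10.96.0.1:443: i/o timeout" before its replacement (178a92d2ed94) acquired the lease, and the busybox pod is stuck in ImagePullBackOff. A minimal, hedged way to re-probe the in-cluster apiserver VIP, assuming the addons-401000 profile is still up (curlimages/curl is just an example image, not part of this suite):

	# Confirm the kubernetes Service still advertises the 10.96.0.1:443 VIP
	kubectl --context addons-401000 get svc kubernetes -o wide
	# Probe the VIP from a throwaway pod; a timeout here would reproduce the provisioner's failure
	kubectl --context addons-401000 run vip-check --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -sk https://10.96.0.1:443/version
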
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-401000 -n addons-401000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-401000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path ingress-nginx-admission-create-ddk54 ingress-nginx-admission-patch-f6fcc helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-401000 describe pod busybox test-local-path ingress-nginx-admission-create-ddk54 ingress-nginx-admission-patch-f6fcc helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-401000 describe pod busybox test-local-path ingress-nginx-admission-create-ddk54 ingress-nginx-admission-patch-f6fcc helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2: exit status 1 (51.740458ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-401000/192.168.105.2
	Start Time:       Tue, 17 Sep 2024 01:41:57 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n29t8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-n29t8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-401000
	  Normal   Pulling    7m43s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xptzr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-xptzr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ddk54" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f6fcc" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-401000 describe pod busybox test-local-path ingress-nginx-admission-create-ddk54 ingress-nginx-admission-patch-f6fcc helper-pod-create-pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.42s)
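
The busybox events in the describe output above carry the root cause for this failure: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed", so the pod never left Pending and the registry check timed out. A hedged repro sketch, assuming the addons-401000 profile is still running, to separate a registry/auth problem from a one-off flake:

	# Retry the exact pull from the node's Docker daemon
	out/minikube-darwin-arm64 ssh -p addons-401000 -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Re-check the pod-level view of the same error
	kubectl --context addons-401000 describe pod busybox | grep -A1 'Failed to pull'
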

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-453000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-453000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.858498125s)

-- stdout --
	* [cert-options-453000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-453000" primary control-plane node in "cert-options-453000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-453000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-453000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-453000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-453000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-453000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.231291ms)

-- stdout --
	* The control-plane node cert-options-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-453000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-453000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-453000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
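
The empty kubeconfig above (clusters: null, users: null) is a downstream symptom: the start at cert_options_test.go:49 never produced a running VM, so no context was written and the port assertion has nothing to scan. For reference, a sketch of what the check would inspect after a successful start (the server address below is assumed; the test passed --apiserver-port=8555):

	# On a healthy profile this would print something like https://192.168.105.x:8555
	kubectl --context cert-options-453000 config view -o jsonpath='{.clusters[0].cluster.server}'
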
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-453000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-453000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.215125ms)

-- stdout --
	* The control-plane node cert-options-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-453000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-453000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-453000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-17 02:32:17.283262 -0700 PDT m=+3292.568327001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-453000 -n cert-options-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-453000 -n cert-options-453000: exit status 7 (31.173125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-453000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-453000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-453000
--- FAIL: TestCertOptions (10.12s)
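
Every failed start in this run stops at the same point: qemu2 cannot reach "/var/run/socket_vmnet" (Connection refused), so no VM ever gets a network and the later assertions all run against a stopped host. A hedged host-side checklist, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest:

	# Does the socket exist, and is the daemon holding it?
	ls -l /var/run/socket_vmnet
	# Restart the daemon (minikube's docs run brew services under sudo; adjust to your install)
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet
	# Then retry one start by hand before re-running the suite
	out/minikube-darwin-arm64 start -p cert-options-453000 --memory=2048 --driver=qemu2 --network=socket_vmnet
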

TestCertExpiration (195.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.977970792s)

-- stdout --
	* [cert-expiration-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-340000" primary control-plane node in "cert-expiration-340000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-340000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-340000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.218620583s)

-- stdout --
	* [cert-expiration-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-340000" primary control-plane node in "cert-expiration-340000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-340000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-340000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-340000" primary control-plane node in "cert-expiration-340000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-340000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-340000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-17 02:35:17.499732 -0700 PDT m=+3472.785656293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-340000 -n cert-expiration-340000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-340000 -n cert-expiration-340000: exit status 7 (57.01525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-340000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-340000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-340000
--- FAIL: TestCertExpiration (195.34s)
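Every start attempt in this block dies at the same point: the QEMU helper cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal diagnostic sketch, assuming the Homebrew-managed socket_vmnet install implied by the /opt/socket_vmnet paths above (the brew service name is an assumption):

	# is the unix socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if socket_vmnet was installed via Homebrew, one way to (re)start the daemon
	sudo brew services start socket_vmnet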

TestDockerFlags (10.29s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-296000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-296000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.057533875s)

-- stdout --
	* [docker-flags-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-296000" primary control-plane node in "docker-flags-296000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-296000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:31:57.008903    4144 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:31:57.009033    4144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:57.009036    4144 out.go:358] Setting ErrFile to fd 2...
	I0917 02:31:57.009038    4144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:57.009188    4144 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:31:57.010317    4144 out.go:352] Setting JSON to false
	I0917 02:31:57.026440    4144 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3687,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:31:57.026505    4144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:31:57.032430    4144 out.go:177] * [docker-flags-296000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:31:57.039344    4144 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:31:57.039383    4144 notify.go:220] Checking for updates...
	I0917 02:31:57.047338    4144 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:31:57.050337    4144 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:31:57.053346    4144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:31:57.056293    4144 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:31:57.059344    4144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:31:57.062670    4144 config.go:182] Loaded profile config "force-systemd-flag-446000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:57.062735    4144 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:57.062777    4144 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:31:57.065173    4144 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:31:57.072290    4144 start.go:297] selected driver: qemu2
	I0917 02:31:57.072296    4144 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:31:57.072304    4144 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:31:57.074714    4144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:31:57.075964    4144 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:31:57.078433    4144 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0917 02:31:57.078451    4144 cni.go:84] Creating CNI manager for ""
	I0917 02:31:57.078476    4144 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:31:57.078480    4144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:31:57.078510    4144 start.go:340] cluster config:
	{Name:docker-flags-296000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:31:57.082263    4144 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:57.089313    4144 out.go:177] * Starting "docker-flags-296000" primary control-plane node in "docker-flags-296000" cluster
	I0917 02:31:57.093304    4144 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:31:57.093325    4144 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:31:57.093333    4144 cache.go:56] Caching tarball of preloaded images
	I0917 02:31:57.093413    4144 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:31:57.093419    4144 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:31:57.093480    4144 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/docker-flags-296000/config.json ...
	I0917 02:31:57.093493    4144 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/docker-flags-296000/config.json: {Name:mkd89fc0e79aba681e14968122bf7088754d9d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:31:57.093718    4144 start.go:360] acquireMachinesLock for docker-flags-296000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:57.093754    4144 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "docker-flags-296000"
	I0917 02:31:57.093766    4144 start.go:93] Provisioning new machine with config: &{Name:docker-flags-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:57.093790    4144 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:57.102308    4144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:57.120458    4144 start.go:159] libmachine.API.Create for "docker-flags-296000" (driver="qemu2")
	I0917 02:31:57.120489    4144 client.go:168] LocalClient.Create starting
	I0917 02:31:57.120563    4144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:57.120594    4144 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:57.120602    4144 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:57.120639    4144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:57.120664    4144 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:57.120674    4144 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:57.121077    4144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:57.279825    4144 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:57.454205    4144 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:57.454216    4144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:57.454433    4144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:31:57.463880    4144 main.go:141] libmachine: STDOUT: 
	I0917 02:31:57.463899    4144 main.go:141] libmachine: STDERR: 
	I0917 02:31:57.463960    4144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2 +20000M
	I0917 02:31:57.471908    4144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:57.471923    4144 main.go:141] libmachine: STDERR: 
	I0917 02:31:57.471940    4144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:31:57.471946    4144 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:57.471960    4144 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:57.471983    4144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:cd:5e:c1:17:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:31:57.473587    4144 main.go:141] libmachine: STDOUT: 
	I0917 02:31:57.473600    4144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:57.473628    4144 client.go:171] duration metric: took 353.134ms to LocalClient.Create
	I0917 02:31:59.475798    4144 start.go:128] duration metric: took 2.382000417s to createHost
	I0917 02:31:59.475842    4144 start.go:83] releasing machines lock for "docker-flags-296000", held for 2.382089417s
	W0917 02:31:59.475907    4144 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:59.493982    4144 out.go:177] * Deleting "docker-flags-296000" in qemu2 ...
	W0917 02:31:59.517076    4144 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:59.517093    4144 start.go:729] Will try again in 5 seconds ...
	I0917 02:32:04.519336    4144 start.go:360] acquireMachinesLock for docker-flags-296000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:32:04.675365    4144 start.go:364] duration metric: took 155.867792ms to acquireMachinesLock for "docker-flags-296000"
	I0917 02:32:04.675495    4144 start.go:93] Provisioning new machine with config: &{Name:docker-flags-296000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-296000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:32:04.675794    4144 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:32:04.688266    4144 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:32:04.737720    4144 start.go:159] libmachine.API.Create for "docker-flags-296000" (driver="qemu2")
	I0917 02:32:04.737762    4144 client.go:168] LocalClient.Create starting
	I0917 02:32:04.737892    4144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:32:04.737966    4144 main.go:141] libmachine: Decoding PEM data...
	I0917 02:32:04.737983    4144 main.go:141] libmachine: Parsing certificate...
	I0917 02:32:04.738052    4144 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:32:04.738101    4144 main.go:141] libmachine: Decoding PEM data...
	I0917 02:32:04.738116    4144 main.go:141] libmachine: Parsing certificate...
	I0917 02:32:04.738727    4144 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:32:04.911738    4144 main.go:141] libmachine: Creating SSH key...
	I0917 02:32:04.965746    4144 main.go:141] libmachine: Creating Disk image...
	I0917 02:32:04.965752    4144 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:32:04.965967    4144 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:32:04.975087    4144 main.go:141] libmachine: STDOUT: 
	I0917 02:32:04.975106    4144 main.go:141] libmachine: STDERR: 
	I0917 02:32:04.975157    4144 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2 +20000M
	I0917 02:32:04.982935    4144 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:32:04.982956    4144 main.go:141] libmachine: STDERR: 
	I0917 02:32:04.982967    4144 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:32:04.982971    4144 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:32:04.982979    4144 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:32:04.983015    4144 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:a8:02:3e:77:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/docker-flags-296000/disk.qcow2
	I0917 02:32:04.984611    4144 main.go:141] libmachine: STDOUT: 
	I0917 02:32:04.984624    4144 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:32:04.984636    4144 client.go:171] duration metric: took 246.870334ms to LocalClient.Create
	I0917 02:32:06.986815    4144 start.go:128] duration metric: took 2.31100225s to createHost
	I0917 02:32:06.986860    4144 start.go:83] releasing machines lock for "docker-flags-296000", held for 2.311463208s
	W0917 02:32:06.987165    4144 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:32:06.999932    4144 out.go:201] 
	W0917 02:32:07.013039    4144 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:32:07.013071    4144 out.go:270] * 
	* 
	W0917 02:32:07.015598    4144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:32:07.022902    4144 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-296000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.223625ms)

-- stdout --
	* The control-plane node docker-flags-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-296000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-296000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-296000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-296000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.863917ms)

-- stdout --
	* The control-plane node docker-flags-296000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-296000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-296000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-296000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-296000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-296000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-17 02:32:07.164457 -0700 PDT m=+3282.449473918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-296000 -n docker-flags-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-296000 -n docker-flags-296000: exit status 7 (29.808667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-296000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-296000
--- FAIL: TestDockerFlags (10.29s)
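For context, the two systemctl probes only mean anything on a running node: docker_test.go:56 checks that both --docker-env pairs land in the docker unit's Environment, and docker_test.go:67 checks that --docker-opt=debug shows up in ExecStart as --debug. An illustrative sketch of the expected shape on a healthy host (not output captured from this run):

	$ out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT
	$ out/minikube-darwin-arm64 -p docker-flags-296000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=... ; argv[]=... --debug ... }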

TestForceSystemdFlag (10.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.244650667s)

-- stdout --
	* [force-systemd-flag-446000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-446000" primary control-plane node in "force-systemd-flag-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:31:51.892951    4123 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:31:51.893080    4123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:51.893084    4123 out.go:358] Setting ErrFile to fd 2...
	I0917 02:31:51.893086    4123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:51.893215    4123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:31:51.894279    4123 out.go:352] Setting JSON to false
	I0917 02:31:51.910224    4123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3681,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:31:51.910320    4123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:31:51.916322    4123 out.go:177] * [force-systemd-flag-446000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:31:51.924253    4123 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:31:51.924304    4123 notify.go:220] Checking for updates...
	I0917 02:31:51.932140    4123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:31:51.935176    4123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:31:51.938244    4123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:31:51.941175    4123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:31:51.944245    4123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:31:51.947510    4123 config.go:182] Loaded profile config "force-systemd-env-154000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:51.947588    4123 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:51.947629    4123 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:31:51.950172    4123 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:31:51.957265    4123 start.go:297] selected driver: qemu2
	I0917 02:31:51.957272    4123 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:31:51.957280    4123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:31:51.959701    4123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:31:51.960959    4123 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:31:51.964349    4123 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 02:31:51.964368    4123 cni.go:84] Creating CNI manager for ""
	I0917 02:31:51.964400    4123 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:31:51.964405    4123 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:31:51.964434    4123 start.go:340] cluster config:
	{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:31:51.968080    4123 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:51.975194    4123 out.go:177] * Starting "force-systemd-flag-446000" primary control-plane node in "force-systemd-flag-446000" cluster
	I0917 02:31:51.979207    4123 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:31:51.979226    4123 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:31:51.979236    4123 cache.go:56] Caching tarball of preloaded images
	I0917 02:31:51.979322    4123 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:31:51.979329    4123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:31:51.979394    4123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/force-systemd-flag-446000/config.json ...
	I0917 02:31:51.979405    4123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/force-systemd-flag-446000/config.json: {Name:mk3135b0e8d3b6d767554a723e1834f18a1bd004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:31:51.979628    4123 start.go:360] acquireMachinesLock for force-systemd-flag-446000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:51.979670    4123 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "force-systemd-flag-446000"
	I0917 02:31:51.979683    4123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:51.979722    4123 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:51.988190    4123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:52.006628    4123 start.go:159] libmachine.API.Create for "force-systemd-flag-446000" (driver="qemu2")
	I0917 02:31:52.006659    4123 client.go:168] LocalClient.Create starting
	I0917 02:31:52.006739    4123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:52.006776    4123 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:52.006789    4123 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:52.006838    4123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:52.006862    4123 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:52.006871    4123 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:52.007231    4123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:52.166460    4123 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:52.198084    4123 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:52.198089    4123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:52.198291    4123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:31:52.207356    4123 main.go:141] libmachine: STDOUT: 
	I0917 02:31:52.207374    4123 main.go:141] libmachine: STDERR: 
	I0917 02:31:52.207433    4123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2 +20000M
	I0917 02:31:52.215142    4123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:52.215157    4123 main.go:141] libmachine: STDERR: 
	I0917 02:31:52.215181    4123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:31:52.215186    4123 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:52.215198    4123 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:52.215226    4123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e2:31:cb:e9:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:31:52.216776    4123 main.go:141] libmachine: STDOUT: 
	I0917 02:31:52.216789    4123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:52.216810    4123 client.go:171] duration metric: took 210.145625ms to LocalClient.Create
	I0917 02:31:54.218970    4123 start.go:128] duration metric: took 2.239239792s to createHost
	I0917 02:31:54.219094    4123 start.go:83] releasing machines lock for "force-systemd-flag-446000", held for 2.239423167s
	W0917 02:31:54.219162    4123 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:54.237240    4123 out.go:177] * Deleting "force-systemd-flag-446000" in qemu2 ...
	W0917 02:31:54.261811    4123 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:54.261830    4123 start.go:729] Will try again in 5 seconds ...
	I0917 02:31:59.264023    4123 start.go:360] acquireMachinesLock for force-systemd-flag-446000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:59.475974    4123 start.go:364] duration metric: took 211.844833ms to acquireMachinesLock for "force-systemd-flag-446000"
	I0917 02:31:59.476105    4123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:59.476408    4123 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:59.482062    4123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:59.529297    4123 start.go:159] libmachine.API.Create for "force-systemd-flag-446000" (driver="qemu2")
	I0917 02:31:59.529341    4123 client.go:168] LocalClient.Create starting
	I0917 02:31:59.529467    4123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:59.529545    4123 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:59.529591    4123 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:59.529660    4123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:59.529706    4123 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:59.529718    4123 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:59.530274    4123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:59.725096    4123 main.go:141] libmachine: Creating SSH key...
	I0917 02:32:00.034726    4123 main.go:141] libmachine: Creating Disk image...
	I0917 02:32:00.034737    4123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:32:00.034938    4123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:32:00.044286    4123 main.go:141] libmachine: STDOUT: 
	I0917 02:32:00.044309    4123 main.go:141] libmachine: STDERR: 
	I0917 02:32:00.044380    4123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2 +20000M
	I0917 02:32:00.052263    4123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:32:00.052281    4123 main.go:141] libmachine: STDERR: 
	I0917 02:32:00.052296    4123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:32:00.052306    4123 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:32:00.052312    4123 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:32:00.052345    4123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:0b:e3:00:6f:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-flag-446000/disk.qcow2
	I0917 02:32:00.053909    4123 main.go:141] libmachine: STDOUT: 
	I0917 02:32:00.053921    4123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:32:00.053936    4123 client.go:171] duration metric: took 524.592083ms to LocalClient.Create
	I0917 02:32:02.056221    4123 start.go:128] duration metric: took 2.57973775s to createHost
	I0917 02:32:02.056330    4123 start.go:83] releasing machines lock for "force-systemd-flag-446000", held for 2.580315875s
	W0917 02:32:02.056685    4123 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:32:02.079386    4123 out.go:201] 
	W0917 02:32:02.083430    4123 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:32:02.083484    4123 out.go:270] * 
	* 
	W0917 02:32:02.086077    4123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:32:02.095306    4123 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-446000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.149583ms)

-- stdout --
	* The control-plane node force-systemd-flag-446000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-446000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-446000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-17 02:32:02.187469 -0700 PDT m=+3277.472461709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-446000 -n force-systemd-flag-446000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-446000 -n force-systemd-flag-446000: exit status 7 (34.737125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-446000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-446000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-446000
--- FAIL: TestForceSystemdFlag (10.43s)
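Note on the failure mode: the test never reaches its --force-systemd assertion, because every /opt/socket_vmnet/bin/socket_vmnet_client invocation above is refused at /var/run/socket_vmnet and no VM ever boots. A minimal spot-check on the build host is sketched below; the paths are taken from the log, while the manual start line follows the socket_vmnet README and is an assumption about this host's setup:

    # Is the socket present, and is a socket_vmnet daemon alive to serve it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If the daemon is down, starting it by hand (gateway address assumed)
    # should clear the "Connection refused" errors for the qemu2 driver:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

The same refusal recurs across the qemu2-driver failures in this report, so a single dead daemon on the agent would explain this whole cluster of tests.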

TestForceSystemdEnv (12.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-154000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-154000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.331354041s)

-- stdout --
	* [force-systemd-env-154000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-154000" primary control-plane node in "force-systemd-env-154000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-154000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:31:44.482127    4085 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:31:44.482281    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:44.482284    4085 out.go:358] Setting ErrFile to fd 2...
	I0917 02:31:44.482286    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:44.482407    4085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:31:44.483534    4085 out.go:352] Setting JSON to false
	I0917 02:31:44.499630    4085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3674,"bootTime":1726561830,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:31:44.499714    4085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:31:44.505376    4085 out.go:177] * [force-systemd-env-154000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:31:44.513304    4085 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:31:44.513348    4085 notify.go:220] Checking for updates...
	I0917 02:31:44.520336    4085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:31:44.523360    4085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:31:44.526320    4085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:31:44.529332    4085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:31:44.532375    4085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0917 02:31:44.535742    4085 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:44.535788    4085 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:31:44.540342    4085 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:31:44.547249    4085 start.go:297] selected driver: qemu2
	I0917 02:31:44.547255    4085 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:31:44.547260    4085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:31:44.549611    4085 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:31:44.552347    4085 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:31:44.555396    4085 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 02:31:44.555410    4085 cni.go:84] Creating CNI manager for ""
	I0917 02:31:44.555432    4085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:31:44.555439    4085 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:31:44.555465    4085 start.go:340] cluster config:
	{Name:force-systemd-env-154000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:31:44.559226    4085 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:44.566361    4085 out.go:177] * Starting "force-systemd-env-154000" primary control-plane node in "force-systemd-env-154000" cluster
	I0917 02:31:44.570355    4085 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:31:44.570374    4085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:31:44.570382    4085 cache.go:56] Caching tarball of preloaded images
	I0917 02:31:44.570455    4085 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:31:44.570461    4085 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:31:44.570523    4085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/force-systemd-env-154000/config.json ...
	I0917 02:31:44.570540    4085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/force-systemd-env-154000/config.json: {Name:mk9203946d90af7fd9c3f0066aae2c1724ced62d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:31:44.570746    4085 start.go:360] acquireMachinesLock for force-systemd-env-154000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:44.570781    4085 start.go:364] duration metric: took 28.542µs to acquireMachinesLock for "force-systemd-env-154000"
	I0917 02:31:44.570793    4085 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-154000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:44.570825    4085 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:44.579364    4085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:44.596831    4085 start.go:159] libmachine.API.Create for "force-systemd-env-154000" (driver="qemu2")
	I0917 02:31:44.596863    4085 client.go:168] LocalClient.Create starting
	I0917 02:31:44.596925    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:44.596955    4085 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:44.596964    4085 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:44.596999    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:44.597022    4085 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:44.597030    4085 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:44.597367    4085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:44.756241    4085 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:45.052732    4085 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:45.052741    4085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:45.052978    4085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:45.062784    4085 main.go:141] libmachine: STDOUT: 
	I0917 02:31:45.062803    4085 main.go:141] libmachine: STDERR: 
	I0917 02:31:45.062867    4085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2 +20000M
	I0917 02:31:45.070828    4085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:45.070841    4085 main.go:141] libmachine: STDERR: 
	I0917 02:31:45.070863    4085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:45.070868    4085 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:45.070878    4085 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:45.070902    4085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b0:54:00:b9:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:45.072554    4085 main.go:141] libmachine: STDOUT: 
	I0917 02:31:45.072568    4085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:45.072586    4085 client.go:171] duration metric: took 475.721167ms to LocalClient.Create
	I0917 02:31:47.074728    4085 start.go:128] duration metric: took 2.503898083s to createHost
	I0917 02:31:47.074809    4085 start.go:83] releasing machines lock for "force-systemd-env-154000", held for 2.504029959s
	W0917 02:31:47.074875    4085 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:47.085482    4085 out.go:177] * Deleting "force-systemd-env-154000" in qemu2 ...
	W0917 02:31:47.109995    4085 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:47.110019    4085 start.go:729] Will try again in 5 seconds ...
	I0917 02:31:52.112128    4085 start.go:360] acquireMachinesLock for force-systemd-env-154000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:54.219263    4085 start.go:364] duration metric: took 2.107068667s to acquireMachinesLock for "force-systemd-env-154000"
	I0917 02:31:54.219410    4085 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-154000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-154000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:54.219707    4085 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:54.230258    4085 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0917 02:31:54.281455    4085 start.go:159] libmachine.API.Create for "force-systemd-env-154000" (driver="qemu2")
	I0917 02:31:54.281514    4085 client.go:168] LocalClient.Create starting
	I0917 02:31:54.281635    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:54.281694    4085 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:54.281713    4085 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:54.281778    4085 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:54.281825    4085 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:54.281838    4085 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:54.282391    4085 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:54.631285    4085 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:54.709200    4085 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:54.709207    4085 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:54.709412    4085 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:54.718631    4085 main.go:141] libmachine: STDOUT: 
	I0917 02:31:54.718652    4085 main.go:141] libmachine: STDERR: 
	I0917 02:31:54.718733    4085 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2 +20000M
	I0917 02:31:54.726616    4085 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:54.726632    4085 main.go:141] libmachine: STDERR: 
	I0917 02:31:54.726646    4085 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:54.726651    4085 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:54.726657    4085 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:54.726697    4085 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:f0:2a:8d:ee:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/force-systemd-env-154000/disk.qcow2
	I0917 02:31:54.728354    4085 main.go:141] libmachine: STDOUT: 
	I0917 02:31:54.728367    4085 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:54.728380    4085 client.go:171] duration metric: took 446.861458ms to LocalClient.Create
	I0917 02:31:56.730546    4085 start.go:128] duration metric: took 2.510820708s to createHost
	I0917 02:31:56.730608    4085 start.go:83] releasing machines lock for "force-systemd-env-154000", held for 2.511282625s
	W0917 02:31:56.730974    4085 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-154000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-154000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:56.750733    4085 out.go:201] 
	W0917 02:31:56.756811    4085 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:31:56.756840    4085 out.go:270] * 
	* 
	W0917 02:31:56.759504    4085 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:31:56.769687    4085 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-154000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-154000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-154000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.668083ms)

-- stdout --
	* The control-plane node force-systemd-env-154000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-154000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-154000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-17 02:31:56.864386 -0700 PDT m=+3272.149353584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-154000 -n force-systemd-env-154000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-154000 -n force-systemd-env-154000: exit status 7 (36.417625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-154000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-154000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-154000
--- FAIL: TestForceSystemdEnv (12.53s)

TestFunctional/parallel/ServiceCmdConnect (36.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-386000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-386000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-tp5vq" [b627a8ea-13e3-41db-8162-0b8046f3e6fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-tp5vq" [b627a8ea-13e3-41db-8162-0b8046f3e6fb] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003824042s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32134
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
E0917 01:56:59.090542    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32134: Get "http://192.168.105.4:32134": dial tcp 192.168.105.4:32134: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-386000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-tp5vq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-386000/192.168.105.4
Start Time:       Tue, 17 Sep 2024 01:56:50 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://acf9e1ff49d10ab01cdf31fe655ce0ce80e9b9f49d7a74e5d6503eb30147aaee
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 17 Sep 2024 01:57:07 -0700
      Finished:     Tue, 17 Sep 2024 01:57:07 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4l74l (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-4l74l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-tp5vq to functional-386000
  Normal   Pulled     19s (x3 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    19s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    19s (x3 over 35s)  kubelet            Started container echoserver-arm
  Warning  BackOff    6s (x3 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-tp5vq_default(b627a8ea-13e3-41db-8162-0b8046f3e6fb)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-386000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-386000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.3.255
IPs:                      10.108.3.255
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32134/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
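The empty Endpoints field ties the connection-refused fetches back to the pod: the echoserver-arm container never becomes Ready (its log shows "exec /usr/sbin/nginx: exec format error"), so the NodePort service has no backend. An exec format error typically means the binary inside the image was built for a different CPU architecture than the node. A quick check, sketched under the assumption of docker access to the functional-386000 node (e.g. via minikube ssh):

    # Architecture the image was built for, versus the node's own:
    docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'
    uname -m    # arm64 expected on this host

If the image reports linux/amd64 on this arm64 node, the test image, not the service plumbing, is the likely culprit.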
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-386000 -n functional-386000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-386000 service                                                                                            | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:56 PDT | 17 Sep 24 01:56 PDT |
	|           | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh -- ls                                                                                          | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh cat                                                                                            | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | /mount-9p/test-1726563436216036000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh stat                                                                                           | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh stat                                                                                           | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh sudo                                                                                           | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3158075812/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh -- ls                                                                                          | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh sudo                                                                                           | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-386000 ssh findmnt                                                                                        | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT | 17 Sep 24 01:57 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-386000 --dry-run                                                                                       | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-386000                                                                                                 | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-386000 | jenkins | v1.34.0 | 17 Sep 24 01:57 PDT |                     |
	|           | -p functional-386000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:57:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:57:24.496191    2375 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:57:24.496309    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.496312    2375 out.go:358] Setting ErrFile to fd 2...
	I0917 01:57:24.496315    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.496435    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 01:57:24.497789    2375 out.go:352] Setting JSON to false
	I0917 01:57:24.514968    2375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1614,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:57:24.515048    2375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:57:24.520098    2375 out.go:177] * [functional-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:57:24.529081    2375 notify.go:220] Checking for updates...
	I0917 01:57:24.533007    2375 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:57:24.536908    2375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:57:24.540042    2375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:57:24.543062    2375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:57:24.546096    2375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 01:57:24.549088    2375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:57:24.552336    2375 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:57:24.552628    2375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:57:24.557038    2375 out.go:177] * Using the qemu2 driver based on the existing profile
	I0917 01:57:24.564018    2375 start.go:297] selected driver: qemu2
	I0917 01:57:24.564024    2375 start.go:901] validating driver "qemu2" against &{Name:functional-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:57:24.564074    2375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:57:24.570063    2375 out.go:201] 
	W0917 01:57:24.574013    2375 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0917 01:57:24.578100    2375 out.go:201] 
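	
	The dry run above fails in minikube's up-front resource validation: the requested 250 MB is rejected before any qemu2 VM is created because it falls below the usable minimum the binary enforces (1800 MB per this log). A minimal Go sketch of that style of check follows; the constant and message shape are assumptions read off the log line, not minikube's actual source.
	
	package main
	
	import "fmt"
	
	// usableMinimumMB mirrors the 1800 MB floor reported in the log above;
	// the real threshold lives inside minikube and may differ by release.
	const usableMinimumMB = 1800
	
	// validateRequestedMemory rejects allocations below the usable minimum,
	// analogous to the RSRC_INSUFFICIENT_REQ_MEMORY exit seen in the dry run.
	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < usableMinimumMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %d MB is less than the usable minimum of %d MB", requestedMB, usableMinimumMB)
		}
		return nil
	}
	
	func main() {
		// 250 MB is the value passed via --memory in the dry-run commands above.
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X", err)
		}
	}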
	
	
	==> Docker <==
	Sep 17 08:57:19 functional-386000 dockerd[6066]: time="2024-09-17T08:57:19.061240266Z" level=warning msg="cleaning up after shim disconnected" id=abffc9157088226a3c1d1a03ee347fd8062231a13be7582c25d87420533fb08f namespace=moby
	Sep 17 08:57:19 functional-386000 dockerd[6066]: time="2024-09-17T08:57:19.061244516Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:57:20 functional-386000 dockerd[6059]: time="2024-09-17T08:57:20.691334783Z" level=info msg="ignoring event" container=dbef08f76463981d21ff8537c1e24829df0c2f7ff05d61dc664943057018fa3f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:57:20 functional-386000 dockerd[6066]: time="2024-09-17T08:57:20.691603938Z" level=info msg="shim disconnected" id=dbef08f76463981d21ff8537c1e24829df0c2f7ff05d61dc664943057018fa3f namespace=moby
	Sep 17 08:57:20 functional-386000 dockerd[6066]: time="2024-09-17T08:57:20.691640187Z" level=warning msg="cleaning up after shim disconnected" id=dbef08f76463981d21ff8537c1e24829df0c2f7ff05d61dc664943057018fa3f namespace=moby
	Sep 17 08:57:20 functional-386000 dockerd[6066]: time="2024-09-17T08:57:20.691645353Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.623821629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.623872543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.623892001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.623927666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:24 functional-386000 dockerd[6059]: time="2024-09-17T08:57:24.661602571Z" level=info msg="ignoring event" container=98a1dc188e18d3e18cf4384f1f0fcf0a68aad11020be9d8f0d0a4319fa942837 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.661796896Z" level=info msg="shim disconnected" id=98a1dc188e18d3e18cf4384f1f0fcf0a68aad11020be9d8f0d0a4319fa942837 namespace=moby
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.661855935Z" level=warning msg="cleaning up after shim disconnected" id=98a1dc188e18d3e18cf4384f1f0fcf0a68aad11020be9d8f0d0a4319fa942837 namespace=moby
	Sep 17 08:57:24 functional-386000 dockerd[6066]: time="2024-09-17T08:57:24.661865476Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.426202655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.426336691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.426352482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.426421604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.460213632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.460328335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.460353126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:25 functional-386000 dockerd[6066]: time="2024-09-17T08:57:25.460410998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 08:57:25 functional-386000 cri-dockerd[6311]: time="2024-09-17T08:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51a6ac21a4cd1d62064a30fc517dcb5f6dce61f9a88b3801135abe890d3b5fb4/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 08:57:25 functional-386000 cri-dockerd[6311]: time="2024-09-17T08:57:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5da1ee5793f074f01a6de384af5213ac947efe187ccd73cc1f68a05f5e18d5b6/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 08:57:25 functional-386000 dockerd[6059]: time="2024-09-17T08:57:25.718632023Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	98a1dc188e18d       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            3                   c4b998f03720f       hello-node-64b4f8f9ff-knlnd
	abffc91570882       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 seconds ago        Exited              mount-munger              0                   dbef08f764639       busybox-mount
	486785d295f88       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         17 seconds ago       Running             myfrontend                0                   9cbd180490932       sp-pod
	acf9e1ff49d10       72565bf5bbedf                                                                                         19 seconds ago       Exited              echoserver-arm            2                   280bd36c02eb3       hello-node-connect-65d86f57f4-tp5vq
	be0cfbde8650b       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         42 seconds ago       Running             nginx                     0                   609a1ef05eeb0       nginx-svc
	3b498d0a23a12       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   09487b844db1d       coredns-7c65d6cfc9-x66vn
	eb1d5a1443b36       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   596eef22b7e71       storage-provisioner
	0483c8d118c71       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   bc6dd5fa124f6       kube-proxy-49xrq
	072dca59ab5b8       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   2d87b3c7bf862       kube-scheduler-functional-386000
	52c3a38b7d1bd       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   424324e9b0a25       kube-controller-manager-functional-386000
	750fd2c96a824       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   59646b8457e9e       etcd-functional-386000
	ded3bdc4b29a3       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   6c50c87fd5797       kube-apiserver-functional-386000
	254b3178f45fc       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   df915b4aef31c       storage-provisioner
	b743b30cad7c2       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   f5780ac7458d0       coredns-7c65d6cfc9-x66vn
	aa3945b82ef28       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   42b878e08c66a       kube-proxy-49xrq
	f14b6e8c009bb       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   bdcdd618bfa23       kube-scheduler-functional-386000
	657dfbe684182       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   198e585008231       kube-controller-manager-functional-386000
	c693a0a219184       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   f0e70bed2bf87       etcd-functional-386000
	
	
	==> coredns [3b498d0a23a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41353 - 3668 "HINFO IN 4422161568947349129.1430598719196774114. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009380534s
	[INFO] 10.244.0.1:14915 - 20086 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000097537s
	[INFO] 10.244.0.1:59432 - 25399 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000086954s
	[INFO] 10.244.0.1:37235 - 32073 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.00002654s
	[INFO] 10.244.0.1:1507 - 22610 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001018246s
	[INFO] 10.244.0.1:13527 - 29546 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000052789s
	[INFO] 10.244.0.1:59003 - 20608 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000079996s
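	
	The A/AAAA answers above are in-cluster lookups against the nginx-svc ClusterIP service. A pod can resolve the same name with the standard resolver; the Go sketch below is an illustration under that assumption and only succeeds when run inside the cluster, where /etc/resolv.conf points at the cluster DNS (10.96.0.10 here).
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Resolve the service name that appears in the CoreDNS query log;
		// from outside the cluster this lookup will simply fail.
		addrs, err := net.LookupHost("nginx-svc.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		for _, a := range addrs {
			fmt.Println("nginx-svc resolves to", a)
		}
	}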
	
	
	==> coredns [b743b30cad7c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55778 - 2277 "HINFO IN 9192629514286605768.1756888316420195608. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009011939s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-386000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-386000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=functional-386000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T01_54_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 08:54:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-386000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 08:57:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 08:57:14 +0000   Tue, 17 Sep 2024 08:54:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 08:57:14 +0000   Tue, 17 Sep 2024 08:54:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 08:57:14 +0000   Tue, 17 Sep 2024 08:54:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 08:57:14 +0000   Tue, 17 Sep 2024 08:54:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-386000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 5fc993f78e104c3193ad9c41a42edff9
	  System UUID:                5fc993f78e104c3193ad9c41a42edff9
	  Boot ID:                    e163c707-6147-45d2-b781-c46ede9f30eb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-knlnd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-65d86f57f4-tp5vq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 coredns-7c65d6cfc9-x66vn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m56s
	  kube-system                 etcd-functional-386000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m2s
	  kube-system                 kube-apiserver-functional-386000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-386000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-proxy-49xrq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-functional-386000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-54blx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-t2dgn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s                 kubelet          Node functional-386000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m2s                 kubelet          Node functional-386000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s                 kubelet          Node functional-386000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m2s                 kubelet          Starting kubelet.
	  Normal  NodeReady                2m58s                kubelet          Node functional-386000 status is now: NodeReady
	  Normal  RegisteredNode           2m57s                node-controller  Node functional-386000 event: Registered Node functional-386000 in Controller
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node functional-386000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node functional-386000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node functional-386000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                 node-controller  Node functional-386000 event: Registered Node functional-386000 in Controller
	  Normal  Starting                 77s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)    kubelet          Node functional-386000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)    kubelet          Node functional-386000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)    kubelet          Node functional-386000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                  node-controller  Node functional-386000 event: Registered Node functional-386000 in Controller
	
	
	==> dmesg <==
	[  +6.948705] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.743780] systemd-fstab-generator[5156]: Ignoring "noauto" option for root device
	[ +10.809906] systemd-fstab-generator[5582]: Ignoring "noauto" option for root device
	[  +0.056931] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.094670] systemd-fstab-generator[5616]: Ignoring "noauto" option for root device
	[  +0.110538] systemd-fstab-generator[5628]: Ignoring "noauto" option for root device
	[  +0.091801] systemd-fstab-generator[5642]: Ignoring "noauto" option for root device
	[  +5.141299] kauditd_printk_skb: 89 callbacks suppressed
	[Sep17 08:56] systemd-fstab-generator[6264]: Ignoring "noauto" option for root device
	[  +0.088227] systemd-fstab-generator[6276]: Ignoring "noauto" option for root device
	[  +0.068992] systemd-fstab-generator[6288]: Ignoring "noauto" option for root device
	[  +0.084600] systemd-fstab-generator[6303]: Ignoring "noauto" option for root device
	[  +0.224498] systemd-fstab-generator[6468]: Ignoring "noauto" option for root device
	[  +1.155664] systemd-fstab-generator[6590]: Ignoring "noauto" option for root device
	[  +4.411330] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.975422] kauditd_printk_skb: 33 callbacks suppressed
	[  +2.700944] systemd-fstab-generator[7598]: Ignoring "noauto" option for root device
	[  +4.841234] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.040501] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.116911] kauditd_printk_skb: 22 callbacks suppressed
	[  +9.368055] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.898684] kauditd_printk_skb: 32 callbacks suppressed
	[Sep17 08:57] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.085069] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.161923] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [750fd2c96a82] <==
	{"level":"info","ts":"2024-09-17T08:56:10.664100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T08:56:10.665302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:56:10.667946Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T08:56:10.667999Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T08:56:10.668058Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T08:56:10.668709Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T08:56:10.669168Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T08:56:12.527657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-17T08:56:12.527853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-17T08:56:12.527917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T08:56:12.527953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-17T08:56:12.528011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-17T08:56:12.528066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-17T08:56:12.528122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-17T08:56:12.535372Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-386000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T08:56:12.535565Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:56:12.536002Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T08:56:12.536218Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T08:56:12.536114Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:56:12.537577Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:56:12.537853Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:56:12.540283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T08:56:12.541129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-17T08:56:57.739370Z","caller":"traceutil/trace.go:171","msg":"trace[1547081671] transaction","detail":"{read_only:false; response_revision:786; number_of_response:1; }","duration":"130.48128ms","start":"2024-09-17T08:56:57.608880Z","end":"2024-09-17T08:56:57.739361Z","steps":["trace[1547081671] 'process raft request'  (duration: 130.280956ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T08:56:59.921372Z","caller":"traceutil/trace.go:171","msg":"trace[1620996942] transaction","detail":"{read_only:false; response_revision:792; number_of_response:1; }","duration":"175.857063ms","start":"2024-09-17T08:56:59.745505Z","end":"2024-09-17T08:56:59.921362Z","steps":["trace[1620996942] 'process raft request'  (duration: 148.255767ms)","trace[1620996942] 'compare'  (duration: 27.443178ms)"],"step_count":2}
	
	
	==> etcd [c693a0a21918] <==
	{"level":"info","ts":"2024-09-17T08:55:26.833890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T08:55:26.833953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-17T08:55:26.834299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T08:55:26.834321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T08:55:26.834351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-17T08:55:26.834429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-17T08:55:26.839326Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-386000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T08:55:26.839778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:55:26.839811Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T08:55:26.840186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T08:55:26.839903Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T08:55:26.842315Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:55:26.842316Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T08:55:26.844498Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-17T08:55:26.846005Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T08:55:55.498104Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-17T08:55:55.498133Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-386000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-17T08:55:55.498187Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T08:55:55.498228Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T08:55:55.512067Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-17T08:55:55.512096Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-17T08:55:55.512117Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-17T08:55:55.513348Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T08:55:55.513380Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-17T08:55:55.513384Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-386000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 08:57:26 up 3 min,  0 users,  load average: 0.76, 0.54, 0.23
	Linux functional-386000 5.10.207 #1 SMP PREEMPT Sun Sep 15 17:39:25 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ded3bdc4b29a] <==
	I0917 08:56:13.172169       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 08:56:13.172172       1 cache.go:39] Caches are synced for autoregister controller
	I0917 08:56:13.181305       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 08:56:13.181313       1 policy_source.go:224] refreshing policies
	I0917 08:56:13.181351       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 08:56:13.197663       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 08:56:14.052749       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0917 08:56:14.160710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0917 08:56:14.161350       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 08:56:14.163128       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 08:56:14.724964       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 08:56:14.728999       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 08:56:14.739260       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 08:56:14.746130       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 08:56:14.748120       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 08:56:31.393147       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.250.80"}
	I0917 08:56:36.389075       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 08:56:36.432711       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.63.48"}
	I0917 08:56:40.472762       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.254.25"}
	I0917 08:56:50.918065       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.3.255"}
	E0917 08:57:07.294188       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49754: use of closed network connection
	E0917 08:57:15.572547       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49759: use of closed network connection
	I0917 08:57:25.011214       1 controller.go:615] quota admission added evaluator for: namespaces
	I0917 08:57:25.094252       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.83.136"}
	I0917 08:57:25.105482       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.204.113"}
	
	
	==> kube-controller-manager [52c3a38b7d1b] <==
	I0917 08:57:14.717805       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-386000"
	I0917 08:57:20.584868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="52.623µs"
	I0917 08:57:25.036230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.520973ms"
	E0917 08:57:25.036277       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.043229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.75009ms"
	E0917 08:57:25.043247       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.046434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.11899ms"
	E0917 08:57:25.046450       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.048874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.47131ms"
	E0917 08:57:25.048890       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.058763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.72892ms"
	E0917 08:57:25.058787       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.062737       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.531558ms"
	E0917 08:57:25.062758       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.063764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.262612ms"
	E0917 08:57:25.063779       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0917 08:57:25.077427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.042911ms"
	I0917 08:57:25.082833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.351606ms"
	I0917 08:57:25.082919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="37.624µs"
	I0917 08:57:25.082957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.958µs"
	I0917 08:57:25.089028       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.625µs"
	I0917 08:57:25.132201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="26.854196ms"
	I0917 08:57:25.143024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.759211ms"
	I0917 08:57:25.143076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="32.79µs"
	I0917 08:57:25.676449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.082µs"
	
	
	==> kube-controller-manager [657dfbe68418] <==
	I0917 08:55:30.704510       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0917 08:55:30.704512       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0917 08:55:30.704564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-386000"
	I0917 08:55:30.713718       1 shared_informer.go:320] Caches are synced for daemon sets
	I0917 08:55:30.713761       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0917 08:55:30.714269       1 shared_informer.go:320] Caches are synced for HPA
	I0917 08:55:30.714300       1 shared_informer.go:320] Caches are synced for job
	I0917 08:55:30.716208       1 shared_informer.go:320] Caches are synced for namespace
	I0917 08:55:30.766124       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 08:55:30.789626       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 08:55:30.789678       1 shared_informer.go:320] Caches are synced for persistent volume
	I0917 08:55:30.815161       1 shared_informer.go:320] Caches are synced for endpoint
	I0917 08:55:30.879042       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:55:30.894004       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0917 08:55:30.899428       1 shared_informer.go:320] Caches are synced for deployment
	I0917 08:55:30.914930       1 shared_informer.go:320] Caches are synced for disruption
	I0917 08:55:30.914992       1 shared_informer.go:320] Caches are synced for attach detach
	I0917 08:55:30.917989       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 08:55:31.072083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="178.010533ms"
	I0917 08:55:31.072575       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="45.748µs"
	I0917 08:55:31.329120       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 08:55:31.364948       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 08:55:31.365010       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 08:55:35.488085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.618067ms"
	I0917 08:55:35.488260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="151.244µs"
	
	
	==> kube-proxy [0483c8d118c7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 08:56:14.150791       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 08:56:14.154334       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0917 08:56:14.154415       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:56:14.163308       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 08:56:14.163353       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 08:56:14.163371       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:56:14.164481       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:56:14.164606       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:56:14.164614       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:56:14.165045       1 config.go:199] "Starting service config controller"
	I0917 08:56:14.165059       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:56:14.165068       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:56:14.165071       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:56:14.165288       1 config.go:328] "Starting node config controller"
	I0917 08:56:14.165295       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:56:14.265686       1 shared_informer.go:320] Caches are synced for node config
	I0917 08:56:14.265704       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 08:56:14.265766       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [aa3945b82ef2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 08:55:28.754215       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 08:55:28.760565       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0917 08:55:28.760599       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 08:55:28.852568       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0917 08:55:28.852592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 08:55:28.852611       1 server_linux.go:169] "Using iptables Proxier"
	I0917 08:55:28.854650       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 08:55:28.854744       1 server.go:483] "Version info" version="v1.31.1"
	I0917 08:55:28.854749       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:55:28.855499       1 config.go:199] "Starting service config controller"
	I0917 08:55:28.855504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 08:55:28.855514       1 config.go:105] "Starting endpoint slice config controller"
	I0917 08:55:28.855516       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 08:55:28.855659       1 config.go:328] "Starting node config controller"
	I0917 08:55:28.855662       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 08:55:28.956040       1 shared_informer.go:320] Caches are synced for node config
	I0917 08:55:28.956040       1 shared_informer.go:320] Caches are synced for service config
	I0917 08:55:28.956050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [072dca59ab5b] <==
	I0917 08:56:10.929467       1 serving.go:386] Generated self-signed cert in-memory
	W0917 08:56:13.090388       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 08:56:13.090408       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 08:56:13.090413       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 08:56:13.090416       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 08:56:13.103108       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 08:56:13.103123       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:56:13.104657       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 08:56:13.104709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 08:56:13.104721       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 08:56:13.104731       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 08:56:13.205026       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f14b6e8c009b] <==
	I0917 08:55:25.208222       1 serving.go:386] Generated self-signed cert in-memory
	W0917 08:55:27.365353       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 08:55:27.365454       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 08:55:27.365470       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 08:55:27.365478       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 08:55:27.396230       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 08:55:27.396377       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 08:55:27.397336       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 08:55:27.397381       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 08:55:27.398087       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 08:55:27.397389       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 08:55:27.498981       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 08:55:55.480634       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 08:55:55.480837       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0917 08:55:55.480914       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 08:57:09 functional-386000 kubelet[6597]: I0917 08:57:09.658661    6597 scope.go:117] "RemoveContainer" containerID="ecd4920f8110e63ad7ced0e742a4c1051ca6eb89dc91ccee695194822f07e19c"
	Sep 17 08:57:11 functional-386000 kubelet[6597]: I0917 08:57:11.569634    6597 scope.go:117] "RemoveContainer" containerID="d523792e10672caa485f42530560fe03f2cf7c856173908b1c777fbbed265a57"
	Sep 17 08:57:11 functional-386000 kubelet[6597]: E0917 08:57:11.572955    6597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-knlnd_default(0d12a9a9-f9a6-4216-9126-7424705637e1)\"" pod="default/hello-node-64b4f8f9ff-knlnd" podUID="0d12a9a9-f9a6-4216-9126-7424705637e1"
	Sep 17 08:57:11 functional-386000 kubelet[6597]: I0917 08:57:11.585028    6597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.797670724 podStartE2EDuration="3.585003611s" podCreationTimestamp="2024-09-17 08:57:08 +0000 UTC" firstStartedPulling="2024-09-17 08:57:08.851578081 +0000 UTC m=+59.334969186" lastFinishedPulling="2024-09-17 08:57:09.638910968 +0000 UTC m=+60.122302073" observedRunningTime="2024-09-17 08:57:10.465627849 +0000 UTC m=+60.949018996" watchObservedRunningTime="2024-09-17 08:57:11.585003611 +0000 UTC m=+62.068394717"
	Sep 17 08:57:17 functional-386000 kubelet[6597]: I0917 08:57:17.177962    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fae33927-0489-4190-ad87-5d93c7f159a3-test-volume\") pod \"busybox-mount\" (UID: \"fae33927-0489-4190-ad87-5d93c7f159a3\") " pod="default/busybox-mount"
	Sep 17 08:57:17 functional-386000 kubelet[6597]: I0917 08:57:17.177995    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2cnb\" (UniqueName: \"kubernetes.io/projected/fae33927-0489-4190-ad87-5d93c7f159a3-kube-api-access-r2cnb\") pod \"busybox-mount\" (UID: \"fae33927-0489-4190-ad87-5d93c7f159a3\") " pod="default/busybox-mount"
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.569664    6597 scope.go:117] "RemoveContainer" containerID="acf9e1ff49d10ab01cdf31fe655ce0ce80e9b9f49d7a74e5d6503eb30147aaee"
	Sep 17 08:57:20 functional-386000 kubelet[6597]: E0917 08:57:20.570001    6597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-tp5vq_default(b627a8ea-13e3-41db-8162-0b8046f3e6fb)\"" pod="default/hello-node-connect-65d86f57f4-tp5vq" podUID="b627a8ea-13e3-41db-8162-0b8046f3e6fb"
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.812306    6597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2cnb\" (UniqueName: \"kubernetes.io/projected/fae33927-0489-4190-ad87-5d93c7f159a3-kube-api-access-r2cnb\") pod \"fae33927-0489-4190-ad87-5d93c7f159a3\" (UID: \"fae33927-0489-4190-ad87-5d93c7f159a3\") "
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.812561    6597 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fae33927-0489-4190-ad87-5d93c7f159a3-test-volume\") pod \"fae33927-0489-4190-ad87-5d93c7f159a3\" (UID: \"fae33927-0489-4190-ad87-5d93c7f159a3\") "
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.812595    6597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fae33927-0489-4190-ad87-5d93c7f159a3-test-volume" (OuterVolumeSpecName: "test-volume") pod "fae33927-0489-4190-ad87-5d93c7f159a3" (UID: "fae33927-0489-4190-ad87-5d93c7f159a3"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.813460    6597 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae33927-0489-4190-ad87-5d93c7f159a3-kube-api-access-r2cnb" (OuterVolumeSpecName: "kube-api-access-r2cnb") pod "fae33927-0489-4190-ad87-5d93c7f159a3" (UID: "fae33927-0489-4190-ad87-5d93c7f159a3"). InnerVolumeSpecName "kube-api-access-r2cnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.913635    6597 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r2cnb\" (UniqueName: \"kubernetes.io/projected/fae33927-0489-4190-ad87-5d93c7f159a3-kube-api-access-r2cnb\") on node \"functional-386000\" DevicePath \"\""
	Sep 17 08:57:20 functional-386000 kubelet[6597]: I0917 08:57:20.913654    6597 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fae33927-0489-4190-ad87-5d93c7f159a3-test-volume\") on node \"functional-386000\" DevicePath \"\""
	Sep 17 08:57:21 functional-386000 kubelet[6597]: I0917 08:57:21.612029    6597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbef08f76463981d21ff8537c1e24829df0c2f7ff05d61dc664943057018fa3f"
	Sep 17 08:57:24 functional-386000 kubelet[6597]: I0917 08:57:24.568663    6597 scope.go:117] "RemoveContainer" containerID="d523792e10672caa485f42530560fe03f2cf7c856173908b1c777fbbed265a57"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: E0917 08:57:25.079307    6597 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fae33927-0489-4190-ad87-5d93c7f159a3" containerName="mount-munger"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.079348    6597 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae33927-0489-4190-ad87-5d93c7f159a3" containerName="mount-munger"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.149925    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlbf\" (UniqueName: \"kubernetes.io/projected/50fee47c-075f-4d67-bfba-8492d890dd32-kube-api-access-mqlbf\") pod \"kubernetes-dashboard-695b96c756-t2dgn\" (UID: \"50fee47c-075f-4d67-bfba-8492d890dd32\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-t2dgn"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.149954    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/50fee47c-075f-4d67-bfba-8492d890dd32-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-t2dgn\" (UID: \"50fee47c-075f-4d67-bfba-8492d890dd32\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-t2dgn"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.250950    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tfdq\" (UniqueName: \"kubernetes.io/projected/5f42fc25-8e43-42d6-9108-1775fe6452b1-kube-api-access-7tfdq\") pod \"dashboard-metrics-scraper-c5db448b4-54blx\" (UID: \"5f42fc25-8e43-42d6-9108-1775fe6452b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-54blx"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.250992    6597 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5f42fc25-8e43-42d6-9108-1775fe6452b1-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-54blx\" (UID: \"5f42fc25-8e43-42d6-9108-1775fe6452b1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-54blx"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.670108    6597 scope.go:117] "RemoveContainer" containerID="d523792e10672caa485f42530560fe03f2cf7c856173908b1c777fbbed265a57"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: I0917 08:57:25.670497    6597 scope.go:117] "RemoveContainer" containerID="98a1dc188e18d3e18cf4384f1f0fcf0a68aad11020be9d8f0d0a4319fa942837"
	Sep 17 08:57:25 functional-386000 kubelet[6597]: E0917 08:57:25.670647    6597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-knlnd_default(0d12a9a9-f9a6-4216-9126-7424705637e1)\"" pod="default/hello-node-64b4f8f9ff-knlnd" podUID="0d12a9a9-f9a6-4216-9126-7424705637e1"
	
	
	==> storage-provisioner [254b3178f45f] <==
	I0917 08:55:42.330522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:55:42.334881       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:55:42.334899       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [eb1d5a1443b3] <==
	I0917 08:56:14.060742       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 08:56:14.079882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 08:56:14.080007       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 08:56:31.501339       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 08:56:31.501452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-386000_8d281cb8-d3c3-4550-bc6d-eec5d1e0af15!
	I0917 08:56:31.501698       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"08a859e6-6c75-4e6f-8601-c4972d85adfd", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-386000_8d281cb8-d3c3-4550-bc6d-eec5d1e0af15 became leader
	I0917 08:56:31.601703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-386000_8d281cb8-d3c3-4550-bc6d-eec5d1e0af15!
	I0917 08:56:54.087598       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0917 08:56:54.088564       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7422378a-d4f3-4892-a373-f4a97e4dadd6", APIVersion:"v1", ResourceVersion:"769", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0917 08:56:54.087735       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f6b80632-c96d-4974-b9c1-f765e5714632 346 0 2024-09-17 08:54:30 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-17 08:54:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-7422378a-d4f3-4892-a373-f4a97e4dadd6 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  7422378a-d4f3-4892-a373-f4a97e4dadd6 769 0 2024-09-17 08:56:54 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-17 08:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-17 08:56:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0917 08:56:54.089479       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-7422378a-d4f3-4892-a373-f4a97e4dadd6" provisioned
	I0917 08:56:54.089508       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0917 08:56:54.089515       1 volume_store.go:212] Trying to save persistentvolume "pvc-7422378a-d4f3-4892-a373-f4a97e4dadd6"
	I0917 08:56:54.094315       1 volume_store.go:219] persistentvolume "pvc-7422378a-d4f3-4892-a373-f4a97e4dadd6" saved
	I0917 08:56:54.094803       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"7422378a-d4f3-4892-a373-f4a97e4dadd6", APIVersion:"v1", ResourceVersion:"769", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-7422378a-d4f3-4892-a373-f4a97e4dadd6
	

-- /stdout --
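Note on the repeated kube-proxy errors in the log above: "could not run nftables command ... Operation not supported" means the guest kernel lacks nftables support, so kube-proxy's rule cleanup fails; since it then selects the iptables proxier ("Using iptables Proxier"), those lines are noise rather than a cause of this failure. A minimal sketch of the same probe, assuming the `nft` binary is available in the guest (the helper and its table name are illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// kube-proxy drives nftables by piping rules to nft on stdin; an
	// "Operation not supported" reply from the kernel is what produces
	// the "Error cleaning up nftables rules" lines above.
	cmd := exec.Command("nft", "-f", "-")
	cmd.Stdin = strings.NewReader("add table ip kube-proxy-probe\ndelete table ip kube-proxy-probe\n")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("no nftables support: %v\n%s", err, out)
		return
	}
	fmt.Println("nftables is available")
}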
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-386000 -n functional-386000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-386000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-54blx kubernetes-dashboard-695b96c756-t2dgn
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-386000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-54blx kubernetes-dashboard-695b96c756-t2dgn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-386000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-54blx kubernetes-dashboard-695b96c756-t2dgn: exit status 1 (45.5545ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-386000/192.168.105.4
	Start Time:       Tue, 17 Sep 2024 01:57:17 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://abffc9157088226a3c1d1a03ee347fd8062231a13be7582c25d87420533fb08f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 17 Sep 2024 01:57:19 -0700
	      Finished:     Tue, 17 Sep 2024 01:57:19 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r2cnb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-r2cnb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-386000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.466s (1.466s including waiting). Image size: 3547125 bytes.
	  Normal  Created    8s    kubelet            Created container mount-munger
	  Normal  Started    8s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-54blx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-t2dgn" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-386000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-54blx kubernetes-dashboard-695b96c756-t2dgn: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (36.63s)
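The kubelet log above shows echoserver-arm cycling through CrashLoopBackOff with back-offs of 20s and then 40s; that progression matches kubelet's default restart back-off, which starts at 10s, doubles on each restart, and is capped at 5 minutes. A small sketch of that schedule (illustrative only, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		base     = 10 * time.Second // kubelet's initial container back-off
		maxDelay = 5 * time.Minute  // kubelet's MaxContainerBackOff default
	)
	delay := base
	for restart := 1; restart <= 8; restart++ {
		// Prints 10s, 20s, 40s, ... matching the "back-off 20s" and
		// "back-off 40s" messages in the kubelet log above.
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}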

TestMultiControlPlane/serial/StopSecondaryNode (312.33s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 node stop m02 -v=7 --alsologtostderr
E0917 02:01:36.445107    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.452748    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.466171    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.489524    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.532869    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.616308    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:36.778699    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:37.102099    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:37.745520    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:39.028922    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:41.592321    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-753000 node stop m02 -v=7 --alsologtostderr: (12.18860525s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
E0917 02:01:45.821205    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:46.714795    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:01:56.958306    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:02:17.441201    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:02:58.404624    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:04:20.328199    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr: exit status 7 (3m45.063221667s)

-- stdout --
	ha-753000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-753000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-753000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0917 02:01:44.206657    2985 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:01:44.206839    2985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:01:44.206843    2985 out.go:358] Setting ErrFile to fd 2...
	I0917 02:01:44.206846    2985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:01:44.206998    2985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:01:44.207161    2985 out.go:352] Setting JSON to false
	I0917 02:01:44.207176    2985 mustload.go:65] Loading cluster: ha-753000
	I0917 02:01:44.207217    2985 notify.go:220] Checking for updates...
	I0917 02:01:44.207470    2985 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:01:44.207478    2985 status.go:255] checking status of ha-753000 ...
	I0917 02:01:44.208287    2985 status.go:330] ha-753000 host status = "Running" (err=<nil>)
	I0917 02:01:44.208297    2985 host.go:66] Checking if "ha-753000" exists ...
	I0917 02:01:44.208420    2985 host.go:66] Checking if "ha-753000" exists ...
	I0917 02:01:44.208550    2985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:01:44.208560    2985 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/id_rsa Username:docker}
	W0917 02:02:59.211064    2985 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0917 02:02:59.211126    2985 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 02:02:59.211135    2985 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 02:02:59.211139    2985 status.go:257] ha-753000 status: &{Name:ha-753000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:02:59.211148    2985 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 02:02:59.211152    2985 status.go:255] checking status of ha-753000-m02 ...
	I0917 02:02:59.211350    2985 status.go:330] ha-753000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:02:59.211355    2985 status.go:343] host is not running, skipping remaining checks
	I0917 02:02:59.211357    2985 status.go:257] ha-753000-m02 status: &{Name:ha-753000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:02:59.211362    2985 status.go:255] checking status of ha-753000-m03 ...
	I0917 02:02:59.211917    2985 status.go:330] ha-753000-m03 host status = "Running" (err=<nil>)
	I0917 02:02:59.211922    2985 host.go:66] Checking if "ha-753000-m03" exists ...
	I0917 02:02:59.212012    2985 host.go:66] Checking if "ha-753000-m03" exists ...
	I0917 02:02:59.212122    2985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:02:59.212128    2985 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m03/id_rsa Username:docker}
	W0917 02:04:14.213513    2985 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0917 02:04:14.213589    2985 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0917 02:04:14.213603    2985 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 02:04:14.213607    2985 status.go:257] ha-753000-m03 status: &{Name:ha-753000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:04:14.213620    2985 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 02:04:14.213627    2985 status.go:255] checking status of ha-753000-m04 ...
	I0917 02:04:14.214434    2985 status.go:330] ha-753000-m04 host status = "Running" (err=<nil>)
	I0917 02:04:14.214442    2985 host.go:66] Checking if "ha-753000-m04" exists ...
	I0917 02:04:14.214564    2985 host.go:66] Checking if "ha-753000-m04" exists ...
	I0917 02:04:14.214695    2985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:04:14.214705    2985 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m04/id_rsa Username:docker}
	W0917 02:05:29.216459    2985 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0917 02:05:29.216646    2985 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0917 02:05:29.216683    2985 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0917 02:05:29.216704    2985 status.go:257] ha-753000-m04 status: &{Name:ha-753000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:05:29.216746    2985 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
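The stderr above also shows why the status command ran for 3m45s: for each node whose host is in "Error", minikube opens an SSH session to run `df -h /var`, and the TCP dial to port 22 sits until `connect: operation timed out` (roughly 75s per node here) before moving on. A minimal reachability sketch of the same dial, with the node IP taken from the log and a short explicit timeout (illustrative helper, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// status.go dials the guest's SSH port before checking /var capacity;
	// 192.168.105.5 is the ha-753000 node address from the log above.
	conn, err := net.DialTimeout("tcp", "192.168.105.5:22", 5*time.Second)
	if err != nil {
		fmt.Println("node unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("node reachable")
}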
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr": ha-753000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-753000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr": ha-753000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-753000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr": ha-753000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-753000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-753000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
E0917 02:06:18.090764    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:06:36.443473    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 3 (1m15.074223667s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:06:44.287630    3005 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 02:06:44.287662    3005 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.33s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0917 02:07:04.156212    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.086241s)
ha_test.go:413: expected profile "ha-753000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-753000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-753000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-753000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 3 (1m15.042809208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:10:29.401209    3039 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 02:10:29.401265    3039 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)
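The assertion at ha_test.go:413 parses the escaped JSON shown above from `profile list --output json` and compares the profile's Status field against "Degraded". A sketch of that parse, with the struct shape inferred from the JSON in the failure message; `minikube` on PATH is assumed here, whereas the real test invokes out/minikube-darwin-arm64:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the shape of `minikube profile list --output json`,
// as inferred from the failure message above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test expects "Degraded" after stopping one control-plane
		// node; this run reported "Stopped" instead.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}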

TestMultiControlPlane/serial/RestartSecondaryNode (305.26s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.140252625s)

-- stdout --
	* Starting "ha-753000-m02" control-plane node in "ha-753000" cluster
	* Restarting existing qemu2 VM for "ha-753000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-753000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:10:29.474567    3047 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:29.474905    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:29.474916    3047 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:29.474920    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:29.475111    3047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:10:29.475448    3047 mustload.go:65] Loading cluster: ha-753000
	I0917 02:10:29.475802    3047 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0917 02:10:29.476116    3047 host.go:58] "ha-753000-m02" host status: Stopped
	I0917 02:10:29.480588    3047 out.go:177] * Starting "ha-753000-m02" control-plane node in "ha-753000" cluster
	I0917 02:10:29.484454    3047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:10:29.484469    3047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:10:29.484477    3047 cache.go:56] Caching tarball of preloaded images
	I0917 02:10:29.484565    3047 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:10:29.484572    3047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:10:29.484641    3047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/ha-753000/config.json ...
	I0917 02:10:29.485178    3047 start.go:360] acquireMachinesLock for ha-753000-m02: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:29.485258    3047 start.go:364] duration metric: took 42.5µs to acquireMachinesLock for "ha-753000-m02"
	I0917 02:10:29.485268    3047 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:29.485275    3047 fix.go:54] fixHost starting: m02
	I0917 02:10:29.485404    3047 fix.go:112] recreateIfNeeded on ha-753000-m02: state=Stopped err=<nil>
	W0917 02:10:29.485411    3047 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:29.489464    3047 out.go:177] * Restarting existing qemu2 VM for "ha-753000-m02" ...
	I0917 02:10:29.493499    3047 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:10:29.493555    3047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:35:0d:12:d5:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/disk.qcow2
	I0917 02:10:29.496446    3047 main.go:141] libmachine: STDOUT: 
	I0917 02:10:29.496464    3047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:10:29.496499    3047 fix.go:56] duration metric: took 11.223417ms for fixHost
	I0917 02:10:29.496509    3047 start.go:83] releasing machines lock for "ha-753000-m02", held for 11.241583ms
	W0917 02:10:29.496516    3047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:10:29.496554    3047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:10:29.496559    3047 start.go:729] Will try again in 5 seconds ...
	I0917 02:10:34.498542    3047 start.go:360] acquireMachinesLock for ha-753000-m02: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:10:34.499050    3047 start.go:364] duration metric: took 388.541µs to acquireMachinesLock for "ha-753000-m02"
	I0917 02:10:34.499200    3047 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:10:34.499221    3047 fix.go:54] fixHost starting: m02
	I0917 02:10:34.500062    3047 fix.go:112] recreateIfNeeded on ha-753000-m02: state=Stopped err=<nil>
	W0917 02:10:34.500089    3047 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:10:34.504182    3047 out.go:177] * Restarting existing qemu2 VM for "ha-753000-m02" ...
	I0917 02:10:34.508126    3047 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:10:34.508292    3047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:35:0d:12:d5:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/disk.qcow2
	I0917 02:10:34.517526    3047 main.go:141] libmachine: STDOUT: 
	I0917 02:10:34.517601    3047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:10:34.517698    3047 fix.go:56] duration metric: took 18.478625ms for fixHost
	I0917 02:10:34.517717    3047 start.go:83] releasing machines lock for "ha-753000-m02", held for 18.644833ms
	W0917 02:10:34.517879    3047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:10:34.523115    3047 out.go:201] 
	W0917 02:10:34.527164    3047 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:10:34.527189    3047 out.go:270] * 
	* 
	W0917 02:10:34.535453    3047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:10:34.540007    3047 out.go:201] 

** /stderr **
ha_test.go:422: I0917 02:10:29.474567    3047 out.go:345] Setting OutFile to fd 1 ...
I0917 02:10:29.474905    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:10:29.474916    3047 out.go:358] Setting ErrFile to fd 2...
I0917 02:10:29.474920    3047 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:10:29.475111    3047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 02:10:29.475448    3047 mustload.go:65] Loading cluster: ha-753000
I0917 02:10:29.475802    3047 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0917 02:10:29.476116    3047 host.go:58] "ha-753000-m02" host status: Stopped
I0917 02:10:29.480588    3047 out.go:177] * Starting "ha-753000-m02" control-plane node in "ha-753000" cluster
I0917 02:10:29.484454    3047 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0917 02:10:29.484469    3047 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0917 02:10:29.484477    3047 cache.go:56] Caching tarball of preloaded images
I0917 02:10:29.484565    3047 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0917 02:10:29.484572    3047 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0917 02:10:29.484641    3047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/ha-753000/config.json ...
I0917 02:10:29.485178    3047 start.go:360] acquireMachinesLock for ha-753000-m02: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0917 02:10:29.485258    3047 start.go:364] duration metric: took 42.5µs to acquireMachinesLock for "ha-753000-m02"
I0917 02:10:29.485268    3047 start.go:96] Skipping create...Using existing machine configuration
I0917 02:10:29.485275    3047 fix.go:54] fixHost starting: m02
I0917 02:10:29.485404    3047 fix.go:112] recreateIfNeeded on ha-753000-m02: state=Stopped err=<nil>
W0917 02:10:29.485411    3047 fix.go:138] unexpected machine state, will restart: <nil>
I0917 02:10:29.489464    3047 out.go:177] * Restarting existing qemu2 VM for "ha-753000-m02" ...
I0917 02:10:29.493499    3047 qemu.go:418] Using hvf for hardware acceleration
I0917 02:10:29.493555    3047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:35:0d:12:d5:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/disk.qcow2
I0917 02:10:29.496446    3047 main.go:141] libmachine: STDOUT: 
I0917 02:10:29.496464    3047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0917 02:10:29.496499    3047 fix.go:56] duration metric: took 11.223417ms for fixHost
I0917 02:10:29.496509    3047 start.go:83] releasing machines lock for "ha-753000-m02", held for 11.241583ms
W0917 02:10:29.496516    3047 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0917 02:10:29.496554    3047 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0917 02:10:29.496559    3047 start.go:729] Will try again in 5 seconds ...
I0917 02:10:34.498542    3047 start.go:360] acquireMachinesLock for ha-753000-m02: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0917 02:10:34.499050    3047 start.go:364] duration metric: took 388.541µs to acquireMachinesLock for "ha-753000-m02"
I0917 02:10:34.499200    3047 start.go:96] Skipping create...Using existing machine configuration
I0917 02:10:34.499221    3047 fix.go:54] fixHost starting: m02
I0917 02:10:34.500062    3047 fix.go:112] recreateIfNeeded on ha-753000-m02: state=Stopped err=<nil>
W0917 02:10:34.500089    3047 fix.go:138] unexpected machine state, will restart: <nil>
I0917 02:10:34.504182    3047 out.go:177] * Restarting existing qemu2 VM for "ha-753000-m02" ...
I0917 02:10:34.508126    3047 qemu.go:418] Using hvf for hardware acceleration
I0917 02:10:34.508292    3047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:35:0d:12:d5:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/disk.qcow2
I0917 02:10:34.517526    3047 main.go:141] libmachine: STDOUT: 
I0917 02:10:34.517601    3047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0917 02:10:34.517698    3047 fix.go:56] duration metric: took 18.478625ms for fixHost
I0917 02:10:34.517717    3047 start.go:83] releasing machines lock for "ha-753000-m02", held for 18.644833ms
W0917 02:10:34.517879    3047 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0917 02:10:34.523115    3047 out.go:201] 
W0917 02:10:34.527164    3047 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0917 02:10:34.527189    3047 out.go:270] * 
* 
W0917 02:10:34.535453    3047 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0917 02:10:34.540007    3047 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-753000 node start m02 -v=7 --alsologtostderr": exit status 80
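
Every restart attempt above dies at the same step: socket_vmnet_client cannot reach the control socket at /var/run/socket_vmnet, so QEMU is never handed its network file descriptor and the qemu2 driver reports "Connection refused". A minimal Go sketch (illustrative only, not minikube code; the socket path is copied from the command line in the log) that reproduces the same connectivity check:

// Probe the socket_vmnet control socket that the qemu2 driver dials above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but no
		// socket_vmnet daemon is listening -- the same failure the driver hits.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
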
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
E0917 02:11:18.070135    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:11:36.424078    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:12:41.162115    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr: exit status 7 (3m45.080565583s)

-- stdout --
	ha-753000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-753000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-753000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0917 02:10:34.610574    3051 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:10:34.610784    3051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:34.610790    3051 out.go:358] Setting ErrFile to fd 2...
	I0917 02:10:34.610794    3051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:10:34.610971    3051 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:10:34.611130    3051 out.go:352] Setting JSON to false
	I0917 02:10:34.611147    3051 mustload.go:65] Loading cluster: ha-753000
	I0917 02:10:34.611202    3051 notify.go:220] Checking for updates...
	I0917 02:10:34.611462    3051 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:10:34.611470    3051 status.go:255] checking status of ha-753000 ...
	I0917 02:10:34.612448    3051 status.go:330] ha-753000 host status = "Running" (err=<nil>)
	I0917 02:10:34.612461    3051 host.go:66] Checking if "ha-753000" exists ...
	I0917 02:10:34.612596    3051 host.go:66] Checking if "ha-753000" exists ...
	I0917 02:10:34.612740    3051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:10:34.612753    3051 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/id_rsa Username:docker}
	W0917 02:11:49.615040    3051 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0917 02:11:49.615267    3051 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 02:11:49.615299    3051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 02:11:49.615314    3051 status.go:257] ha-753000 status: &{Name:ha-753000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:11:49.615350    3051 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0917 02:11:49.615368    3051 status.go:255] checking status of ha-753000-m02 ...
	I0917 02:11:49.616111    3051 status.go:330] ha-753000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:11:49.616129    3051 status.go:343] host is not running, skipping remaining checks
	I0917 02:11:49.616138    3051 status.go:257] ha-753000-m02 status: &{Name:ha-753000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:11:49.616157    3051 status.go:255] checking status of ha-753000-m03 ...
	I0917 02:11:49.618214    3051 status.go:330] ha-753000-m03 host status = "Running" (err=<nil>)
	I0917 02:11:49.618236    3051 host.go:66] Checking if "ha-753000-m03" exists ...
	I0917 02:11:49.618618    3051 host.go:66] Checking if "ha-753000-m03" exists ...
	I0917 02:11:49.619115    3051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:11:49.619139    3051 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m03/id_rsa Username:docker}
	W0917 02:13:04.620927    3051 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0917 02:13:04.621095    3051 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0917 02:13:04.621127    3051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 02:13:04.621143    3051 status.go:257] ha-753000-m03 status: &{Name:ha-753000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:13:04.621181    3051 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 02:13:04.621197    3051 status.go:255] checking status of ha-753000-m04 ...
	I0917 02:13:04.623772    3051 status.go:330] ha-753000-m04 host status = "Running" (err=<nil>)
	I0917 02:13:04.623801    3051 host.go:66] Checking if "ha-753000-m04" exists ...
	I0917 02:13:04.624307    3051 host.go:66] Checking if "ha-753000-m04" exists ...
	I0917 02:13:04.624760    3051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 02:13:04.624782    3051 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m04/id_rsa Username:docker}
	W0917 02:14:19.626902    3051 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0917 02:14:19.626965    3051 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0917 02:14:19.626975    3051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0917 02:14:19.626979    3051 status.go:257] ha-753000-m04 status: &{Name:ha-753000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0917 02:14:19.626989    3051 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr" : exit status 7
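
Each status probe above blocks for roughly 75 seconds per node because the SSH dial to port 22 is left to the operating system's TCP connect timeout. The health check itself is just df -h /var run over SSH. A sketch of the same probe with an explicit client-side timeout (using golang.org/x/crypto/ssh; the IP, user, and key path are copied from the log above, and the 10s value is an illustrative choice, not minikube's):

// Run minikube's disk-usage health check ("df -h /var") over SSH,
// failing fast on a dead node instead of waiting out the OS timeout.
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         10 * time.Second,            // bound the dial explicitly
	}
	client, err := ssh.Dial("tcp", "192.168.105.5:22", cfg)
	if err != nil {
		log.Fatal(err) // an unreachable node now fails in 10s, not ~75s
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("df -h /var | awk 'NR==2{print $5}'")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var usage: %s", out)
}
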
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 3 (1m15.039165542s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0917 02:15:34.664090    3070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0917 02:15:34.664124    3070 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.26s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.58s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-753000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-753000 -v=7 --alsologtostderr
E0917 02:21:18.067105    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:21:36.421563    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-753000 -v=7 --alsologtostderr: (5m27.171057334s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-753000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-753000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.236013791s)

-- stdout --
	* [ha-753000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-753000" primary control-plane node in "ha-753000" cluster
	* Restarting existing qemu2 VM for "ha-753000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-753000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:23:32.053375    3143 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:23:32.053574    3143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:32.053579    3143 out.go:358] Setting ErrFile to fd 2...
	I0917 02:23:32.053582    3143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:32.053749    3143 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:23:32.055069    3143 out.go:352] Setting JSON to false
	I0917 02:23:32.076874    3143 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3182,"bootTime":1726561830,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:23:32.076972    3143 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:23:32.081922    3143 out.go:177] * [ha-753000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:23:32.090026    3143 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:23:32.090071    3143 notify.go:220] Checking for updates...
	I0917 02:23:32.097020    3143 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:23:32.100848    3143 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:23:32.104002    3143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:23:32.106974    3143 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:23:32.110006    3143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:23:32.113340    3143 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:23:32.113402    3143 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:23:32.117915    3143 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:23:32.124934    3143 start.go:297] selected driver: qemu2
	I0917 02:23:32.124940    3143 start.go:901] validating driver "qemu2" against &{Name:ha-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-753000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:23:32.125023    3143 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:23:32.128106    3143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:23:32.128135    3143 cni.go:84] Creating CNI manager for ""
	I0917 02:23:32.128173    3143 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0917 02:23:32.128236    3143 start.go:340] cluster config:
	{Name:ha-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-753000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:23:32.132786    3143 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:23:32.140882    3143 out.go:177] * Starting "ha-753000" primary control-plane node in "ha-753000" cluster
	I0917 02:23:32.144990    3143 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:23:32.145009    3143 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:23:32.145018    3143 cache.go:56] Caching tarball of preloaded images
	I0917 02:23:32.145101    3143 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:23:32.145107    3143 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:23:32.145177    3143 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/ha-753000/config.json ...
	I0917 02:23:32.145630    3143 start.go:360] acquireMachinesLock for ha-753000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:23:32.145670    3143 start.go:364] duration metric: took 33.083µs to acquireMachinesLock for "ha-753000"
	I0917 02:23:32.145680    3143 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:23:32.145685    3143 fix.go:54] fixHost starting: 
	I0917 02:23:32.145820    3143 fix.go:112] recreateIfNeeded on ha-753000: state=Stopped err=<nil>
	W0917 02:23:32.145828    3143 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:23:32.151021    3143 out.go:177] * Restarting existing qemu2 VM for "ha-753000" ...
	I0917 02:23:32.159049    3143 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:23:32.159096    3143 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8f:49:4f:0f:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/disk.qcow2
	I0917 02:23:32.161501    3143 main.go:141] libmachine: STDOUT: 
	I0917 02:23:32.161519    3143 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:23:32.161552    3143 fix.go:56] duration metric: took 15.866208ms for fixHost
	I0917 02:23:32.161559    3143 start.go:83] releasing machines lock for "ha-753000", held for 15.88375ms
	W0917 02:23:32.161564    3143 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:23:32.161598    3143 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:23:32.161607    3143 start.go:729] Will try again in 5 seconds ...
	I0917 02:23:37.163760    3143 start.go:360] acquireMachinesLock for ha-753000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:23:37.164148    3143 start.go:364] duration metric: took 321.042µs to acquireMachinesLock for "ha-753000"
	I0917 02:23:37.164273    3143 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:23:37.164292    3143 fix.go:54] fixHost starting: 
	I0917 02:23:37.165052    3143 fix.go:112] recreateIfNeeded on ha-753000: state=Stopped err=<nil>
	W0917 02:23:37.165077    3143 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:23:37.169364    3143 out.go:177] * Restarting existing qemu2 VM for "ha-753000" ...
	I0917 02:23:37.177432    3143 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:23:37.177593    3143 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:8f:49:4f:0f:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/disk.qcow2
	I0917 02:23:37.186860    3143 main.go:141] libmachine: STDOUT: 
	I0917 02:23:37.186932    3143 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:23:37.187004    3143 fix.go:56] duration metric: took 22.710333ms for fixHost
	I0917 02:23:37.187031    3143 start.go:83] releasing machines lock for "ha-753000", held for 22.856792ms
	W0917 02:23:37.187210    3143 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-753000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:23:37.195445    3143 out.go:201] 
	W0917 02:23:37.199475    3143 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:23:37.199519    3143 out.go:270] * 
	* 
	W0917 02:23:37.202127    3143 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:23:37.213453    3143 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-753000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-753000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 7 (33.194833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.58s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.144916ms)

-- stdout --
	* The control-plane node ha-753000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-753000"

-- /stdout --
** stderr ** 
	I0917 02:23:37.354849    3156 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:23:37.355093    3156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.355096    3156 out.go:358] Setting ErrFile to fd 2...
	I0917 02:23:37.355099    3156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.355229    3156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:23:37.355475    3156 mustload.go:65] Loading cluster: ha-753000
	I0917 02:23:37.355722    3156 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0917 02:23:37.356059    3156 out.go:270] ! The control-plane node ha-753000 host is not running (will try others): state=Stopped
	! The control-plane node ha-753000 host is not running (will try others): state=Stopped
	W0917 02:23:37.356162    3156 out.go:270] ! The control-plane node ha-753000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-753000-m02 host is not running (will try others): state=Stopped
	I0917 02:23:37.359537    3156 out.go:177] * The control-plane node ha-753000-m03 host is not running: state=Stopped
	I0917 02:23:37.362398    3156 out.go:177]   To start a cluster, run: "minikube start -p ha-753000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-753000 node delete m03 -v=7 --alsologtostderr": exit status 83
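
For context on how these failures are detected: the harness shells out to the minikube binary and asserts on the exit code (83 here). A stripped-down sketch of that run-and-check pattern, using only the Go standard library (the real helpers live in helpers_test.go; this is not their actual code):

// The "(dbg) Run / Non-zero exit" pattern seen throughout this report:
// execute the binary, capture combined output, and extract the exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-753000",
		"node", "delete", "m03", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// For the run above this would report "non-zero exit: 83".
		fmt.Printf("non-zero exit: %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to start:", err) // e.g. binary not found
		return
	}
	fmt.Printf("ok:\n%s", out)
}
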
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr: exit status 7 (31.127584ms)

-- stdout --
	ha-753000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-753000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 02:23:37.395349    3158 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:23:37.395503    3158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.395506    3158 out.go:358] Setting ErrFile to fd 2...
	I0917 02:23:37.395509    3158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.395634    3158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:23:37.395769    3158 out.go:352] Setting JSON to false
	I0917 02:23:37.395779    3158 mustload.go:65] Loading cluster: ha-753000
	I0917 02:23:37.395836    3158 notify.go:220] Checking for updates...
	I0917 02:23:37.396024    3158 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:23:37.396034    3158 status.go:255] checking status of ha-753000 ...
	I0917 02:23:37.396286    3158 status.go:330] ha-753000 host status = "Stopped" (err=<nil>)
	I0917 02:23:37.396290    3158 status.go:343] host is not running, skipping remaining checks
	I0917 02:23:37.396292    3158 status.go:257] ha-753000 status: &{Name:ha-753000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:23:37.396302    3158 status.go:255] checking status of ha-753000-m02 ...
	I0917 02:23:37.396389    3158 status.go:330] ha-753000-m02 host status = "Stopped" (err=<nil>)
	I0917 02:23:37.396392    3158 status.go:343] host is not running, skipping remaining checks
	I0917 02:23:37.396394    3158 status.go:257] ha-753000-m02 status: &{Name:ha-753000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:23:37.396397    3158 status.go:255] checking status of ha-753000-m03 ...
	I0917 02:23:37.396485    3158 status.go:330] ha-753000-m03 host status = "Stopped" (err=<nil>)
	I0917 02:23:37.396488    3158 status.go:343] host is not running, skipping remaining checks
	I0917 02:23:37.396490    3158 status.go:257] ha-753000-m03 status: &{Name:ha-753000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 02:23:37.396493    3158 status.go:255] checking status of ha-753000-m04 ...
	I0917 02:23:37.396593    3158 status.go:330] ha-753000-m04 host status = "Stopped" (err=<nil>)
	I0917 02:23:37.396596    3158 status.go:343] host is not running, skipping remaining checks
	I0917 02:23:37.396598    3158 status.go:257] ha-753000-m04 status: &{Name:ha-753000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 7 (30.441291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-753000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-753000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-753000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-753000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 7 (29.659458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
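
The Degraded check that failed above decodes the `profile list --output json` payload and compares each profile's Status field against "Degraded". A minimal sketch of that decoding step, with the struct reduced to just the fields the check reads (field names taken from the JSON quoted in the failure message; not the test's actual code):

// Decode `minikube profile list --output json` and print each profile's
// status -- the value ha_test.go compares against "Degraded".
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Reduced view of the payload quoted above; unknown fields are
// silently ignored by encoding/json.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // here: "ha-753000: Stopped"
	}
}
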

TestMultiControlPlane/serial/StopCluster (234.02s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 stop -v=7 --alsologtostderr
E0917 02:26:18.066476    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:26:36.420446    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 stop -v=7 --alsologtostderr: signal: killed (3m53.946150083s)

-- stdout --
	* Stopping node "ha-753000-m04"  ...
	* Stopping node "ha-753000-m03"  ...
	* Stopping node "ha-753000-m02"  ...
	* Stopping node "ha-753000"  ...

-- /stdout --
** stderr ** 
	I0917 02:23:37.534786    3167 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:23:37.535165    3167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.535170    3167 out.go:358] Setting ErrFile to fd 2...
	I0917 02:23:37.535172    3167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:23:37.535353    3167 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:23:37.535618    3167 out.go:352] Setting JSON to false
	I0917 02:23:37.535718    3167 mustload.go:65] Loading cluster: ha-753000
	I0917 02:23:37.536056    3167 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:23:37.536114    3167 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/ha-753000/config.json ...
	I0917 02:23:37.536378    3167 mustload.go:65] Loading cluster: ha-753000
	I0917 02:23:37.536463    3167 config.go:182] Loaded profile config "ha-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:23:37.536480    3167 stop.go:39] StopHost: ha-753000-m04
	I0917 02:23:37.541386    3167 out.go:177] * Stopping node "ha-753000-m04"  ...
	I0917 02:23:37.552457    3167 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 02:23:37.552506    3167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 02:23:37.552517    3167 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m04/id_rsa Username:docker}
	W0917 02:24:52.553883    3167 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0917 02:24:52.554143    3167 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0917 02:24:52.554294    3167 main.go:141] libmachine: Stopping "ha-753000-m04"...
	I0917 02:24:52.554439    3167 stop.go:66] stop err: Machine "ha-753000-m04" is already stopped.
	I0917 02:24:52.554468    3167 stop.go:69] host is already stopped
	I0917 02:24:52.554495    3167 stop.go:39] StopHost: ha-753000-m03
	I0917 02:24:52.559231    3167 out.go:177] * Stopping node "ha-753000-m03"  ...
	I0917 02:24:52.566350    3167 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 02:24:52.566516    3167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 02:24:52.566569    3167 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m03/id_rsa Username:docker}
	W0917 02:26:07.569119    3167 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0917 02:26:07.569339    3167 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0917 02:26:07.569406    3167 main.go:141] libmachine: Stopping "ha-753000-m03"...
	I0917 02:26:07.569553    3167 stop.go:66] stop err: Machine "ha-753000-m03" is already stopped.
	I0917 02:26:07.569583    3167 stop.go:69] host is already stopped
	I0917 02:26:07.569614    3167 stop.go:39] StopHost: ha-753000-m02
	I0917 02:26:07.579735    3167 out.go:177] * Stopping node "ha-753000-m02"  ...
	I0917 02:26:07.582767    3167 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 02:26:07.583422    3167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 02:26:07.583458    3167 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000-m02/id_rsa Username:docker}
	W0917 02:27:22.585732    3167 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.6:22: connect: operation timed out
	W0917 02:27:22.585931    3167 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.6:22: connect: operation timed out
	I0917 02:27:22.586017    3167 main.go:141] libmachine: Stopping "ha-753000-m02"...
	I0917 02:27:22.586176    3167 stop.go:66] stop err: Machine "ha-753000-m02" is already stopped.
	I0917 02:27:22.586203    3167 stop.go:69] host is already stopped
	I0917 02:27:22.586229    3167 stop.go:39] StopHost: ha-753000
	I0917 02:27:22.591488    3167 out.go:177] * Stopping node "ha-753000"  ...
	I0917 02:27:22.598341    3167 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0917 02:27:22.598504    3167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0917 02:27:22.598537    3167 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/ha-753000/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-753000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr: context deadline exceeded (2.375µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-753000 -n ha-753000: exit status 7 (71.061541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-753000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (234.02s)
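
Note on the timing above: each "dial failure (will retry)" takes almost exactly 75 seconds (02:23:37 to 02:24:52, 02:24:52 to 02:26:07, and so on), consistent with the OS-level TCP connect timeout; backing up three unreachable nodes in sequence is what pushes the stop past the harness deadline ("signal: killed" at 3m53s). A sketch of bounding the dial explicitly so the failure surfaces quickly; the 10-second value is an assumption for illustration, not minikube's actual setting:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		start := time.Now()
		// 192.168.105.8:22 is the unreachable ha-753000-m04 SSH endpoint from the log.
		conn, err := net.DialTimeout("tcp", "192.168.105.8:22", 10*time.Second)
		if err != nil {
			fmt.Printf("gave up after %v: %v\n", time.Since(start).Round(time.Second), err)
			return
		}
		conn.Close()
		fmt.Println("connected after", time.Since(start).Round(time.Second))
	}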

TestImageBuild/serial/Setup (10.13s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-441000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-441000 --driver=qemu2 : exit status 80 (10.064119375s)

-- stdout --
	* [image-441000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-441000" primary control-plane node in "image-441000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-441000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-441000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-441000 -n image-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-441000 -n image-441000: exit status 7 (68.296083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-441000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.13s)
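
Note on the root cause: every GUEST_PROVISION failure in this run reduces to the same error, 'Failed to connect to "/var/run/socket_vmnet": Connection refused', meaning the socket_vmnet daemon is not listening, so socket_vmnet_client (and therefore the QEMU VM's network) cannot attach. A pre-flight probe of the socket reproduces the error without starting a VM; the probe itself is an illustrative sketch, not part of minikube:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Same path the qemu2 driver passes to socket_vmnet_client.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this host this prints the "connection refused" seen throughout the report.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}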

TestJSONOutput/start/Command (9.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-570000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-570000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.797225042s)

-- stdout --
	{"specversion":"1.0","id":"a3685b9d-547a-46a4-a228-9bba9f87f208","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-570000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a04a2d21-8a91-4d6c-8e7d-8f7cdf649ccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"f6d0018f-be30-4504-9eee-499451723827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig"}}
	{"specversion":"1.0","id":"3d7a65c2-6e4c-4aca-a88f-9de820223b5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4468106d-c5ef-4ee8-a122-273b09d67c7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e9008b21-e988-4b95-9fa0-a3b9f983a06b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube"}}
	{"specversion":"1.0","id":"a37cd696-8b70-4fc1-9c0a-d9cf70c060ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"96a4aa8d-5594-484f-9404-6c233cd891e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"469ba78a-a429-40b5-b31e-1577204935b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"89d8e4bf-2cbf-4afc-9389-281b4df5e3cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-570000\" primary control-plane node in \"json-output-570000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a81155e7-d54a-4613-96fe-b9c9c011349f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c521e4da-3998-40d0-8edd-7de5423cbc07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-570000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b70cdeca-656d-46c9-8292-2d44c2075baa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"79b38635-b671-43c8-a8cb-37102b5beb75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"445db53c-15dc-49b9-b8f5-cb5a174534a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-570000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"9cb83e2c-4836-4d3f-9bcb-aa6f9526ce0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"e823f52e-c3ba-43a4-aa0c-0fa045783a86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-570000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.80s)
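
Note on the parse error above: "invalid character 'O' looking for beginning of value" is the test unmarshalling stdout line by line and tripping over the bare "OUTPUT:" and "ERROR:" lines the qemu2 driver prints between the CloudEvents. A sketch that separates the two kinds of lines (a tolerant scanner for illustration, not the test's actual behaviour):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Abbreviated stdout from the run above: one CloudEvent, then raw driver output.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `,
			`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
		}
		for _, line := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				fmt.Printf("not a cloud event: %q\n", line)
				continue
			}
			fmt.Printf("cloud event: type=%v\n", ev["type"])
		}
	}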

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-570000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-570000 --output=json --user=testUser: exit status 83 (77.063875ms)

-- stdout --
	{"specversion":"1.0","id":"8ec71574-ae12-453a-8c3a-247f22c836db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-570000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"d0e0ba98-e773-4c06-b94d-e29b7483ba10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-570000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-570000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-570000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-570000 --output=json --user=testUser: exit status 83 (43.847167ms)

-- stdout --
	* The control-plane node json-output-570000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-570000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-570000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-570000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-100000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-100000 --driver=qemu2 : exit status 80 (9.894348125s)

-- stdout --
	* [first-100000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-100000" primary control-plane node in "first-100000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-100000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-100000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-100000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-17 02:28:04.494958 -0700 PDT m=+3039.778817834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-102000 -n second-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-102000 -n second-102000: exit status 85 (78.259208ms)

-- stdout --
	* Profile "second-102000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-102000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-102000" host is not running, skipping log retrieval (state="* Profile \"second-102000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-102000\"")
helpers_test.go:175: Cleaning up "second-102000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-102000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-17 02:28:04.684346 -0700 PDT m=+3039.968206209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-100000 -n first-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-100000 -n first-100000: exit status 7 (30.83525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-100000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-100000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-100000
--- FAIL: TestMinikubeProfile (10.19s)

TestMountStart/serial/StartWithMountFirst (10.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-390000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-390000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.976468917s)

-- stdout --
	* [mount-start-1-390000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-390000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-390000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-390000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-390000 -n mount-start-1-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-390000 -n mount-start-1-390000: exit status 7 (72.484709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-390000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.05s)

TestMultiNode/serial/FreshStart2Nodes (10.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-661000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-661000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.178331333s)

-- stdout --
	* [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-661000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:28:15.058355    3318 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:28:15.058472    3318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:28:15.058476    3318 out.go:358] Setting ErrFile to fd 2...
	I0917 02:28:15.058482    3318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:28:15.058617    3318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:28:15.059662    3318 out.go:352] Setting JSON to false
	I0917 02:28:15.075732    3318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3465,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:28:15.075809    3318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:28:15.083005    3318 out.go:177] * [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:28:15.090983    3318 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:28:15.091034    3318 notify.go:220] Checking for updates...
	I0917 02:28:15.099076    3318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:28:15.101950    3318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:28:15.105013    3318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:28:15.108017    3318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:28:15.110974    3318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:28:15.114079    3318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:28:15.117947    3318 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:28:15.124942    3318 start.go:297] selected driver: qemu2
	I0917 02:28:15.124949    3318 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:28:15.124957    3318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:28:15.127358    3318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:28:15.129965    3318 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:28:15.133951    3318 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:28:15.133968    3318 cni.go:84] Creating CNI manager for ""
	I0917 02:28:15.133988    3318 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0917 02:28:15.133992    3318 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 02:28:15.134017    3318 start.go:340] cluster config:
	{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:28:15.137701    3318 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:28:15.144822    3318 out.go:177] * Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	I0917 02:28:15.149003    3318 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:28:15.149022    3318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:28:15.149032    3318 cache.go:56] Caching tarball of preloaded images
	I0917 02:28:15.149106    3318 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:28:15.149119    3318 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:28:15.149348    3318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/multinode-661000/config.json ...
	I0917 02:28:15.149360    3318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/multinode-661000/config.json: {Name:mk179341dfb5c2f9606c0e2d3883d65797af9e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:28:15.149590    3318 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:28:15.149628    3318 start.go:364] duration metric: took 32µs to acquireMachinesLock for "multinode-661000"
	I0917 02:28:15.149640    3318 start.go:93] Provisioning new machine with config: &{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:28:15.149667    3318 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:28:15.155950    3318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:28:15.174590    3318 start.go:159] libmachine.API.Create for "multinode-661000" (driver="qemu2")
	I0917 02:28:15.174625    3318 client.go:168] LocalClient.Create starting
	I0917 02:28:15.174694    3318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:28:15.174725    3318 main.go:141] libmachine: Decoding PEM data...
	I0917 02:28:15.174738    3318 main.go:141] libmachine: Parsing certificate...
	I0917 02:28:15.174776    3318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:28:15.174801    3318 main.go:141] libmachine: Decoding PEM data...
	I0917 02:28:15.174813    3318 main.go:141] libmachine: Parsing certificate...
	I0917 02:28:15.175228    3318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:28:15.326495    3318 main.go:141] libmachine: Creating SSH key...
	I0917 02:28:15.636729    3318 main.go:141] libmachine: Creating Disk image...
	I0917 02:28:15.636739    3318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:28:15.637017    3318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:15.647049    3318 main.go:141] libmachine: STDOUT: 
	I0917 02:28:15.647064    3318 main.go:141] libmachine: STDERR: 
	I0917 02:28:15.647123    3318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2 +20000M
	I0917 02:28:15.655161    3318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:28:15.655176    3318 main.go:141] libmachine: STDERR: 
	I0917 02:28:15.655187    3318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:15.655202    3318 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:28:15.655213    3318 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:28:15.655245    3318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e9:0e:c3:1a:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:15.656892    3318 main.go:141] libmachine: STDOUT: 
	I0917 02:28:15.656907    3318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:28:15.656936    3318 client.go:171] duration metric: took 482.308167ms to LocalClient.Create
	I0917 02:28:17.659111    3318 start.go:128] duration metric: took 2.509434416s to createHost
	I0917 02:28:17.659176    3318 start.go:83] releasing machines lock for "multinode-661000", held for 2.509549833s
	W0917 02:28:17.659228    3318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:28:17.672404    3318 out.go:177] * Deleting "multinode-661000" in qemu2 ...
	W0917 02:28:17.702426    3318 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:28:17.702446    3318 start.go:729] Will try again in 5 seconds ...
	I0917 02:28:22.704622    3318 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:28:22.705092    3318 start.go:364] duration metric: took 365µs to acquireMachinesLock for "multinode-661000"
	I0917 02:28:22.705213    3318 start.go:93] Provisioning new machine with config: &{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:28:22.705500    3318 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:28:22.723038    3318 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:28:22.773333    3318 start.go:159] libmachine.API.Create for "multinode-661000" (driver="qemu2")
	I0917 02:28:22.773389    3318 client.go:168] LocalClient.Create starting
	I0917 02:28:22.773516    3318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:28:22.773582    3318 main.go:141] libmachine: Decoding PEM data...
	I0917 02:28:22.773597    3318 main.go:141] libmachine: Parsing certificate...
	I0917 02:28:22.773658    3318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:28:22.773702    3318 main.go:141] libmachine: Decoding PEM data...
	I0917 02:28:22.773714    3318 main.go:141] libmachine: Parsing certificate...
	I0917 02:28:22.774272    3318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:28:22.935717    3318 main.go:141] libmachine: Creating SSH key...
	I0917 02:28:23.141788    3318 main.go:141] libmachine: Creating Disk image...
	I0917 02:28:23.141799    3318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:28:23.142019    3318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:23.151429    3318 main.go:141] libmachine: STDOUT: 
	I0917 02:28:23.151450    3318 main.go:141] libmachine: STDERR: 
	I0917 02:28:23.151519    3318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2 +20000M
	I0917 02:28:23.159505    3318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:28:23.159546    3318 main.go:141] libmachine: STDERR: 
	I0917 02:28:23.159557    3318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:23.159562    3318 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:28:23.159571    3318 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:28:23.159608    3318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:37:cc:ca:3a:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:28:23.161307    3318 main.go:141] libmachine: STDOUT: 
	I0917 02:28:23.161319    3318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:28:23.161334    3318 client.go:171] duration metric: took 387.942333ms to LocalClient.Create
	I0917 02:28:25.163514    3318 start.go:128] duration metric: took 2.45799625s to createHost
	I0917 02:28:25.163569    3318 start.go:83] releasing machines lock for "multinode-661000", held for 2.4584615s
	W0917 02:28:25.163937    3318 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:28:25.177571    3318 out.go:201] 
	W0917 02:28:25.181592    3318 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:28:25.181618    3318 out.go:270] * 
	* 
	W0917 02:28:25.184445    3318 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:28:25.193546    3318 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-661000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (66.66975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.25s)
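The root cause of this group of failures is visible in the qemu invocation above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never created and every subsequent step sees a stopped host. Below is a minimal Go sketch (not part of the test suite; the socket path is taken from the command line above) that probes the daemon before attempting a start:

	// probe.go - check whether socket_vmnet is accepting connections on the
	// unix socket minikube passes to socket_vmnet_client.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the qemu command line above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same symptom the test hit: nothing is listening on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, the daemon is not running on the CI host and would need to be restarted (however it is managed there) before "minikube start" with --driver=qemu2 and Network=socket_vmnet can succeed.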

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (80.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.26775ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-661000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- rollout status deployment/busybox: exit status 1 (58.817083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.24775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.242834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.972083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.241667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.71825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.593208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.035458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.559625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.318292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0917 02:29:21.161570    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.682042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.396459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.301708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.897917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.614041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.441291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.64s)
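The repeated multinode_test.go:505 entries above are a poll loop: the test re-queries the pod IPs until output appears or the retry budget runs out, which is why the same "no server found" error recurs every few seconds. A sketch of that pattern as a hypothetical standalone helper (the command and arguments mirror the log; this is not the suite's actual retry machinery):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// podIPs shells out the same way the test does and returns the raw jsonpath output.
	func podIPs(ctx context.Context) (string, error) {
		out, err := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
			"kubectl", "-p", "multinode-661000", "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return string(out), err
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 80*time.Second)
		defer cancel()
		for {
			if ips, err := podIPs(ctx); err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for pod IPs:", ctx.Err())
				return
			case <-time.After(5 * time.Second):
			}
		}
	}

With no cluster behind the profile, every iteration fails identically until the deadline, which matches the roughly 80 seconds this test spent before giving up.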

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-661000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.75675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.6265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-661000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-661000 -v 3 --alsologtostderr: exit status 83 (41.958959ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-661000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:46.035665    3395 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:46.035848    3395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.035851    3395 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:46.035854    3395 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.035982    3395 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:46.036221    3395 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:46.036434    3395 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:46.041606    3395 out.go:177] * The control-plane node multinode-661000 host is not running: state=Stopped
	I0917 02:29:46.045499    3395 out.go:177]   To start a cluster, run: "minikube start -p multinode-661000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-661000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.51475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-661000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-661000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.256958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-661000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-661000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-661000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.326833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
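The second error at multinode_test.go:230 follows directly from the first: kubectl printed nothing, so decoding the empty string fails with "unexpected end of JSON input". A sketch (with a hypothetical decodeLabels helper) of a guard that separates "no output" from "malformed JSON"; the trailing-comma cleanup reflects what the jsonpath template [{range .items[*]}{.metadata.labels},{end}] would emit on a healthy cluster:

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func decodeLabels(raw string) ([]map[string]string, error) {
		if strings.TrimSpace(raw) == "" {
			return nil, fmt.Errorf("kubectl produced no output (is the cluster running?)")
		}
		// The template emits "[{...},{...},]"; drop the comma before the bracket.
		raw = strings.Replace(raw, ",]", "]", 1)
		var labels []map[string]string
		if err := json.Unmarshal([]byte(raw), &labels); err != nil {
			return nil, err
		}
		return labels, nil
	}

	func main() {
		got, err := decodeLabels(`[{"kubernetes.io/hostname":"multinode-661000"},]`)
		fmt.Println(got, err)
	}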

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-661000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-661000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-661000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-661000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.395125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
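The assertion at multinode_test.go:166 decodes the JSON quoted above and counts the entries under Config.Nodes: it expected 3 but found only the single placeholder node, since the VM never started. A sketch of that count, using illustrative struct names rather than minikube's own config types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields the check needs.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the JSON in the failure message above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-661000","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // test wanted 3, report shows 1
		}
	}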

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status --output json --alsologtostderr: exit status 7 (29.733ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-661000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:46.252163    3407 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:46.252325    3407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.252328    3407 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:46.252331    3407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.252451    3407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:46.252569    3407 out.go:352] Setting JSON to true
	I0917 02:29:46.252583    3407 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:46.252643    3407 notify.go:220] Checking for updates...
	I0917 02:29:46.252775    3407 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:46.252781    3407 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:46.253021    3407 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:46.253024    3407 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:46.253027    3407 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-661000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.325625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
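The unmarshal error at multinode_test.go:191 is a shape mismatch: with a single node, "minikube status --output json" printed one object, while the test decodes into a slice ([]cmd.Status), hence "cannot unmarshal object into Go value of type []cmd.Status". A sketch of a tolerant decode that accepts both shapes (the Status struct here lists only the fields visible in the log):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	// decodeStatuses tries the array form first, then falls back to a single object.
	func decodeStatuses(raw []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-661000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		st, err := decodeStatuses(raw)
		fmt.Println(st, err)
	}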

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 node stop m03: exit status 85 (48.503125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-661000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status: exit status 7 (30.728417ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr: exit status 7 (30.245666ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:46.392717    3415 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:46.392869    3415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.392873    3415 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:46.392875    3415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.393043    3415 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:46.393159    3415 out.go:352] Setting JSON to false
	I0917 02:29:46.393169    3415 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:46.393236    3415 notify.go:220] Checking for updates...
	I0917 02:29:46.393397    3415 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:46.393407    3415 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:46.393650    3415 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:46.393654    3415 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:46.393656    3415 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr": multinode-661000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.645792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.874875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:46.454064    3419 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:46.454336    3419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.454339    3419 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:46.454342    3419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.454477    3419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:46.454706    3419 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:46.454893    3419 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:46.458494    3419 out.go:201] 
	W0917 02:29:46.461518    3419 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0917 02:29:46.461523    3419 out.go:270] * 
	* 
	W0917 02:29:46.463293    3419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:29:46.466447    3419 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0917 02:29:46.454064    3419 out.go:345] Setting OutFile to fd 1 ...
I0917 02:29:46.454336    3419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:29:46.454339    3419 out.go:358] Setting ErrFile to fd 2...
I0917 02:29:46.454342    3419 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 02:29:46.454477    3419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 02:29:46.454706    3419 mustload.go:65] Loading cluster: multinode-661000
I0917 02:29:46.454893    3419 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 02:29:46.458494    3419 out.go:201] 
W0917 02:29:46.461518    3419 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0917 02:29:46.461523    3419 out.go:270] * 
* 
W0917 02:29:46.463293    3419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0917 02:29:46.466447    3419 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-661000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (30.144458ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:46.498853    3421 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:46.498999    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.499002    3421 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:46.499005    3421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:46.499141    3421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:46.499275    3421 out.go:352] Setting JSON to false
	I0917 02:29:46.499289    3421 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:46.499335    3421 notify.go:220] Checking for updates...
	I0917 02:29:46.499514    3421 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:46.499521    3421 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:46.499753    3421 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:46.499756    3421 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:46.499758    3421 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (74.598667ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:47.454741    3423 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:47.454979    3423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:47.454983    3423 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:47.454987    3423 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:47.455150    3423 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:47.455311    3423 out.go:352] Setting JSON to false
	I0917 02:29:47.455326    3423 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:47.455373    3423 notify.go:220] Checking for updates...
	I0917 02:29:47.455603    3423 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:47.455615    3423 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:47.455936    3423 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:47.455940    3423 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:47.455943    3423 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (73.953333ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:49.132961    3425 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:49.133149    3425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:49.133154    3425 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:49.133157    3425 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:49.133337    3425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:49.133511    3425 out.go:352] Setting JSON to false
	I0917 02:29:49.133524    3425 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:49.133570    3425 notify.go:220] Checking for updates...
	I0917 02:29:49.133811    3425 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:49.133820    3425 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:49.134134    3425 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:49.134139    3425 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:49.134142    3425 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (73.814375ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:51.274284    3429 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:51.274527    3429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:51.274532    3429 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:51.274535    3429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:51.274707    3429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:51.274884    3429 out.go:352] Setting JSON to false
	I0917 02:29:51.274898    3429 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:51.274938    3429 notify.go:220] Checking for updates...
	I0917 02:29:51.275200    3429 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:51.275209    3429 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:51.275538    3429 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:51.275542    3429 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:51.275545    3429 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (74.344416ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:54.287873    3431 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:54.288093    3431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:54.288098    3431 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:54.288101    3431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:54.288299    3431 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:54.288461    3431 out.go:352] Setting JSON to false
	I0917 02:29:54.288474    3431 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:54.288512    3431 notify.go:220] Checking for updates...
	I0917 02:29:54.288794    3431 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:54.288805    3431 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:54.289127    3431 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:54.289133    3431 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:54.289136    3431 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (74.456333ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:29:58.668905    3433 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:29:58.669111    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:58.669116    3433 out.go:358] Setting ErrFile to fd 2...
	I0917 02:29:58.669119    3433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:29:58.669296    3433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:29:58.669456    3433 out.go:352] Setting JSON to false
	I0917 02:29:58.669469    3433 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:29:58.669517    3433 notify.go:220] Checking for updates...
	I0917 02:29:58.669758    3433 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:29:58.669767    3433 status.go:255] checking status of multinode-661000 ...
	I0917 02:29:58.670054    3433 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:29:58.670059    3433 status.go:343] host is not running, skipping remaining checks
	I0917 02:29:58.670062    3433 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (73.924125ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:08.019472    3737 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:08.019735    3737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:08.019739    3737 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:08.019743    3737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:08.019912    3737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:08.020087    3737 out.go:352] Setting JSON to false
	I0917 02:30:08.020101    3737 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:30:08.020144    3737 notify.go:220] Checking for updates...
	I0917 02:30:08.020372    3737 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:08.020383    3737 status.go:255] checking status of multinode-661000 ...
	I0917 02:30:08.020746    3737 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:30:08.020751    3737 status.go:343] host is not running, skipping remaining checks
	I0917 02:30:08.020754    3737 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr: exit status 7 (73.688375ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:24.757012    3739 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:24.757225    3739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:24.757229    3739 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:24.757232    3739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:24.757390    3739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:24.757563    3739 out.go:352] Setting JSON to false
	I0917 02:30:24.757575    3739 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:30:24.757624    3739 notify.go:220] Checking for updates...
	I0917 02:30:24.757846    3739 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:24.757855    3739 status.go:255] checking status of multinode-661000 ...
	I0917 02:30:24.758158    3739 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:30:24.758163    3739 status.go:343] host is not running, skipping remaining checks
	I0917 02:30:24.758166    3739 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (33.55625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (38.37s)
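Note: every status probe above exits 7 because the profile's host is reported as Stopped; the harness itself flags exit 7 as "may be ok" for a status call. A minimal manual reproduction of the probe, using the same binary and profile named in the log:

    out/minikube-darwin-arm64 -p multinode-661000 status -v=7 --alsologtostderr
    echo "status exit code: $?"   # 7 => host reported as Stopped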

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-661000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-661000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-661000: (3.536724s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226646833s)

                                                
                                                
-- stdout --
	* [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	* Restarting existing qemu2 VM for "multinode-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:28.422986    3763 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:28.423151    3763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:28.423156    3763 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:28.423159    3763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:28.423316    3763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:28.424500    3763 out.go:352] Setting JSON to false
	I0917 02:30:28.443798    3763 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3598,"bootTime":1726561830,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:30:28.443873    3763 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:30:28.448538    3763 out.go:177] * [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:30:28.454481    3763 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:30:28.454516    3763 notify.go:220] Checking for updates...
	I0917 02:30:28.462513    3763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:30:28.465474    3763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:30:28.469531    3763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:30:28.472499    3763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:30:28.475412    3763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:30:28.478696    3763 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:28.478745    3763 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:30:28.483467    3763 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:30:28.490415    3763 start.go:297] selected driver: qemu2
	I0917 02:30:28.490421    3763 start.go:901] validating driver "qemu2" against &{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:30:28.490504    3763 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:30:28.493011    3763 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:30:28.493037    3763 cni.go:84] Creating CNI manager for ""
	I0917 02:30:28.493068    3763 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 02:30:28.493128    3763 start.go:340] cluster config:
	{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:30:28.496950    3763 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:30:28.505593    3763 out.go:177] * Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	I0917 02:30:28.509432    3763 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:30:28.509447    3763 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:30:28.509453    3763 cache.go:56] Caching tarball of preloaded images
	I0917 02:30:28.509515    3763 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:30:28.509521    3763 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:30:28.509573    3763 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/multinode-661000/config.json ...
	I0917 02:30:28.510023    3763 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:30:28.510062    3763 start.go:364] duration metric: took 32.833µs to acquireMachinesLock for "multinode-661000"
	I0917 02:30:28.510072    3763 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:30:28.510077    3763 fix.go:54] fixHost starting: 
	I0917 02:30:28.510216    3763 fix.go:112] recreateIfNeeded on multinode-661000: state=Stopped err=<nil>
	W0917 02:30:28.510225    3763 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:30:28.518452    3763 out.go:177] * Restarting existing qemu2 VM for "multinode-661000" ...
	I0917 02:30:28.522316    3763 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:30:28.522356    3763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:37:cc:ca:3a:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:30:28.524652    3763 main.go:141] libmachine: STDOUT: 
	I0917 02:30:28.524676    3763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:30:28.524708    3763 fix.go:56] duration metric: took 14.62975ms for fixHost
	I0917 02:30:28.524712    3763 start.go:83] releasing machines lock for "multinode-661000", held for 14.645167ms
	W0917 02:30:28.524719    3763 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:30:28.524762    3763 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:30:28.524767    3763 start.go:729] Will try again in 5 seconds ...
	I0917 02:30:33.527043    3763 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:30:33.527523    3763 start.go:364] duration metric: took 360.292µs to acquireMachinesLock for "multinode-661000"
	I0917 02:30:33.527658    3763 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:30:33.527678    3763 fix.go:54] fixHost starting: 
	I0917 02:30:33.528404    3763 fix.go:112] recreateIfNeeded on multinode-661000: state=Stopped err=<nil>
	W0917 02:30:33.528432    3763 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:30:33.532936    3763 out.go:177] * Restarting existing qemu2 VM for "multinode-661000" ...
	I0917 02:30:33.540876    3763 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:30:33.541082    3763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:37:cc:ca:3a:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:30:33.550596    3763 main.go:141] libmachine: STDOUT: 
	I0917 02:30:33.550665    3763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:30:33.550762    3763 fix.go:56] duration metric: took 23.082625ms for fixHost
	I0917 02:30:33.550788    3763 start.go:83] releasing machines lock for "multinode-661000", held for 23.240542ms
	W0917 02:30:33.550974    3763 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:30:33.559925    3763 out.go:201] 
	W0917 02:30:33.563998    3763 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:30:33.564025    3763 out.go:270] * 
	* 
	W0917 02:30:33.566678    3763 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:30:33.573941    3763 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-661000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (33.462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.90s)
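Note: the restart logic is not the underlying failure here; every qemu2 VM start above is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the daemon behind /var/run/socket_vmnet is refusing connections, so both restart attempts fail identically. A rough diagnostic sketch for the CI host (the restart command assumes a Homebrew-managed socket_vmnet service, which may not match this machine's install under /opt/socket_vmnet):

    ls -l /var/run/socket_vmnet              # the daemon's Unix socket; absent or stale when it is down
    pgrep -fl socket_vmnet                   # is any socket_vmnet daemon running at all?
    sudo brew services restart socket_vmnet  # Homebrew installs only; a source install is typically a launchd job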

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 node delete m03: exit status 83 (39.992167ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-661000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-661000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr: exit status 7 (30.639333ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:33.759340    3780 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:33.759506    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:33.759509    3780 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:33.759512    3780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:33.759637    3780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:33.759753    3780 out.go:352] Setting JSON to false
	I0917 02:30:33.759763    3780 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:30:33.759816    3780 notify.go:220] Checking for updates...
	I0917 02:30:33.759980    3780 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:33.759990    3780 status.go:255] checking status of multinode-661000 ...
	I0917 02:30:33.760216    3780 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:30:33.760220    3780 status.go:343] host is not running, skipping remaining checks
	I0917 02:30:33.760222    3780 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (32.286125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
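Note: exit status 83 is minikube declining to manage nodes while the control plane is down, as its own hint says. The sequence the test presupposes, using the commands already shown in this report:

    out/minikube-darwin-arm64 start -p multinode-661000            # control plane must be running first
    out/minikube-darwin-arm64 -p multinode-661000 node delete m03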

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-661000 stop: (1.983980792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status: exit status 7 (69.725916ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr: exit status 7 (33.26075ms)

                                                
                                                
-- stdout --
	multinode-661000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:35.879349    3796 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:35.879496    3796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:35.879499    3796 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:35.879501    3796 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:35.879633    3796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:35.879762    3796 out.go:352] Setting JSON to false
	I0917 02:30:35.879772    3796 mustload.go:65] Loading cluster: multinode-661000
	I0917 02:30:35.879833    3796 notify.go:220] Checking for updates...
	I0917 02:30:35.879975    3796 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:35.879984    3796 status.go:255] checking status of multinode-661000 ...
	I0917 02:30:35.880229    3796 status.go:330] multinode-661000 host status = "Stopped" (err=<nil>)
	I0917 02:30:35.880233    3796 status.go:343] host is not running, skipping remaining checks
	I0917 02:30:35.880235    3796 status.go:257] multinode-661000 status: &{Name:multinode-661000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr": multinode-661000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-661000 status --alsologtostderr": multinode-661000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.195375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.12s)
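Note: `stop` itself succeeded; the assertions fail because the test counts one "host: Stopped" / "kubelet: Stopped" entry per expected node, and only the control plane appears since the second node was never added. A rough shell equivalent of that count (hypothetical, for illustration):

    out/minikube-darwin-arm64 -p multinode-661000 status | grep -c "host: Stopped"
    # the test expects 2 (control plane + worker); this cluster reports only 1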

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18446575s)

                                                
                                                
-- stdout --
	* [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	* Restarting existing qemu2 VM for "multinode-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-661000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:30:35.939354    3800 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:30:35.939491    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:35.939494    3800 out.go:358] Setting ErrFile to fd 2...
	I0917 02:30:35.939497    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:30:35.939632    3800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:30:35.940687    3800 out.go:352] Setting JSON to false
	I0917 02:30:35.956696    3800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3605,"bootTime":1726561830,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:30:35.956768    3800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:30:35.960753    3800 out.go:177] * [multinode-661000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:30:35.967620    3800 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:30:35.967669    3800 notify.go:220] Checking for updates...
	I0917 02:30:35.973665    3800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:30:35.976608    3800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:30:35.979559    3800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:30:35.982569    3800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:30:35.985546    3800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:30:35.988879    3800 config.go:182] Loaded profile config "multinode-661000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:30:35.989149    3800 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:30:35.992473    3800 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:30:35.999510    3800 start.go:297] selected driver: qemu2
	I0917 02:30:35.999515    3800 start.go:901] validating driver "qemu2" against &{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:30:35.999562    3800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:30:36.001729    3800 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:30:36.001757    3800 cni.go:84] Creating CNI manager for ""
	I0917 02:30:36.001780    3800 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0917 02:30:36.001836    3800 start.go:340] cluster config:
	{Name:multinode-661000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-661000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:30:36.005273    3800 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:30:36.012520    3800 out.go:177] * Starting "multinode-661000" primary control-plane node in "multinode-661000" cluster
	I0917 02:30:36.016609    3800 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:30:36.016633    3800 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:30:36.016639    3800 cache.go:56] Caching tarball of preloaded images
	I0917 02:30:36.016711    3800 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:30:36.016718    3800 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:30:36.016780    3800 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/multinode-661000/config.json ...
	I0917 02:30:36.017236    3800 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:30:36.017267    3800 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "multinode-661000"
	I0917 02:30:36.017276    3800 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:30:36.017283    3800 fix.go:54] fixHost starting: 
	I0917 02:30:36.017403    3800 fix.go:112] recreateIfNeeded on multinode-661000: state=Stopped err=<nil>
	W0917 02:30:36.017412    3800 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:30:36.025640    3800 out.go:177] * Restarting existing qemu2 VM for "multinode-661000" ...
	I0917 02:30:36.029553    3800 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:30:36.029595    3800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:37:cc:ca:3a:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:30:36.031743    3800 main.go:141] libmachine: STDOUT: 
	I0917 02:30:36.031761    3800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:30:36.031792    3800 fix.go:56] duration metric: took 14.50975ms for fixHost
	I0917 02:30:36.031796    3800 start.go:83] releasing machines lock for "multinode-661000", held for 14.524292ms
	W0917 02:30:36.031803    3800 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:30:36.031835    3800 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:30:36.031840    3800 start.go:729] Will try again in 5 seconds ...
	I0917 02:30:41.034021    3800 start.go:360] acquireMachinesLock for multinode-661000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:30:41.034508    3800 start.go:364] duration metric: took 385.125µs to acquireMachinesLock for "multinode-661000"
	I0917 02:30:41.034665    3800 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:30:41.034683    3800 fix.go:54] fixHost starting: 
	I0917 02:30:41.035458    3800 fix.go:112] recreateIfNeeded on multinode-661000: state=Stopped err=<nil>
	W0917 02:30:41.035490    3800 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:30:41.044900    3800 out.go:177] * Restarting existing qemu2 VM for "multinode-661000" ...
	I0917 02:30:41.048914    3800 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:30:41.049173    3800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:37:cc:ca:3a:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/multinode-661000/disk.qcow2
	I0917 02:30:41.058682    3800 main.go:141] libmachine: STDOUT: 
	I0917 02:30:41.058753    3800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:30:41.058842    3800 fix.go:56] duration metric: took 24.154ms for fixHost
	I0917 02:30:41.058862    3800 start.go:83] releasing machines lock for "multinode-661000", held for 24.331833ms
	W0917 02:30:41.059093    3800 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-661000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:30:41.066823    3800 out.go:201] 
	W0917 02:30:41.070936    3800 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:30:41.070962    3800 out.go:270] * 
	* 
	W0917 02:30:41.073410    3800 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:30:41.082826    3800 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-661000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (69.832167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-661000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-661000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-661000-m01 --driver=qemu2 : exit status 80 (10.044639625s)

                                                
                                                
-- stdout --
	* [multinode-661000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-661000-m01" primary control-plane node in "multinode-661000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-661000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-661000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-661000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-661000-m02 --driver=qemu2 : exit status 80 (10.049491209s)

                                                
                                                
-- stdout --
	* [multinode-661000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-661000-m02" primary control-plane node in "multinode-661000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-661000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-661000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-661000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-661000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-661000: exit status 83 (79.518208ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-661000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-661000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-661000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-661000 -n multinode-661000: exit status 7 (30.724333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-661000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.32s)
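Note: both conflict profiles fail at VM creation for the same socket_vmnet reason; "Automatically selected the socket_vmnet network" shows that fresh profiles default to that network whenever the client binary is present. For a local reproduction without the daemon, the qemu2 driver's built-in user-mode network is a possible workaround (an assumption about local setup; it trades away features such as `minikube service` and `minikube tunnel`):

    out/minikube-darwin-arm64 start -p multinode-661000-m01 --driver=qemu2 --network=builtin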

                                                
                                    
TestPreload (9.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-773000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-773000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.823522458s)

-- stdout --
	* [test-preload-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-773000" primary control-plane node in "test-preload-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:31:01.627121    3855 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:31:01.627259    3855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:01.627263    3855 out.go:358] Setting ErrFile to fd 2...
	I0917 02:31:01.627265    3855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:31:01.627396    3855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:31:01.628454    3855 out.go:352] Setting JSON to false
	I0917 02:31:01.644552    3855 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3631,"bootTime":1726561830,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:31:01.644624    3855 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:31:01.650827    3855 out.go:177] * [test-preload-773000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:31:01.657772    3855 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:31:01.657823    3855 notify.go:220] Checking for updates...
	I0917 02:31:01.664686    3855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:31:01.667756    3855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:31:01.670755    3855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:31:01.673675    3855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:31:01.676728    3855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:31:01.680129    3855 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:31:01.680177    3855 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:31:01.684738    3855 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:31:01.691718    3855 start.go:297] selected driver: qemu2
	I0917 02:31:01.691724    3855 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:31:01.691730    3855 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:31:01.694045    3855 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:31:01.696767    3855 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:31:01.699914    3855 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:31:01.699935    3855 cni.go:84] Creating CNI manager for ""
	I0917 02:31:01.699957    3855 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:31:01.699961    3855 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:31:01.699997    3855 start.go:340] cluster config:
	{Name:test-preload-773000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:31:01.703703    3855 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.711689    3855 out.go:177] * Starting "test-preload-773000" primary control-plane node in "test-preload-773000" cluster
	I0917 02:31:01.715770    3855 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0917 02:31:01.715857    3855 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/test-preload-773000/config.json ...
	I0917 02:31:01.715879    3855 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/test-preload-773000/config.json: {Name:mk2997475b6fcf717409fc1076262ca2105fda8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:31:01.715891    3855 cache.go:107] acquiring lock: {Name:mkab1e37cbc263e4ad02c96576bb0c71290ec7b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.715893    3855 cache.go:107] acquiring lock: {Name:mk004c197445c172a9681e92e58cfc082246c3da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.715913    3855 cache.go:107] acquiring lock: {Name:mk25f0d9e71a944b47a477dfb1d68ec6da9ea6f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.715897    3855 cache.go:107] acquiring lock: {Name:mk9d8a1f08e46b8c6eb181a98e625d8eea7ca961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.716061    3855 cache.go:107] acquiring lock: {Name:mke1a55ffcc269f3721977fad5703c26fd3737f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.716119    3855 cache.go:107] acquiring lock: {Name:mk4e6d40d3060f2ad8d01be369371f1a2b3df592 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.716143    3855 cache.go:107] acquiring lock: {Name:mk208475046066cedc4ae5ba56dc71027490101d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.716156    3855 cache.go:107] acquiring lock: {Name:mkc5255da7ce5690fe84fc78bb43d669469f76ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:31:01.716319    3855 start.go:360] acquireMachinesLock for test-preload-773000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:01.716362    3855 start.go:364] duration metric: took 33.542µs to acquireMachinesLock for "test-preload-773000"
	I0917 02:31:01.716477    3855 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 02:31:01.716487    3855 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0917 02:31:01.716501    3855 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0917 02:31:01.716458    3855 start.go:93] Provisioning new machine with config: &{Name:test-preload-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:01.716530    3855 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:01.716538    3855 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:31:01.716501    3855 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:31:01.716566    3855 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0917 02:31:01.716691    3855 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:31:01.716483    3855 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0917 02:31:01.719737    3855 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:31:01.728244    3855 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0917 02:31:01.728341    3855 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0917 02:31:01.728361    3855 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0917 02:31:01.728396    3855 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:31:01.730497    3855 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0917 02:31:01.730547    3855 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:31:01.730571    3855 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:31:01.730608    3855 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 02:31:01.737593    3855 start.go:159] libmachine.API.Create for "test-preload-773000" (driver="qemu2")
	I0917 02:31:01.737611    3855 client.go:168] LocalClient.Create starting
	I0917 02:31:01.737683    3855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:01.737714    3855 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:01.737722    3855 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:01.737767    3855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:01.737790    3855 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:01.737797    3855 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:01.738190    3855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:01.892635    3855 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:02.017838    3855 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:02.017867    3855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:02.018143    3855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:02.028335    3855 main.go:141] libmachine: STDOUT: 
	I0917 02:31:02.028355    3855 main.go:141] libmachine: STDERR: 
	I0917 02:31:02.028421    3855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2 +20000M
	I0917 02:31:02.037371    3855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:02.037393    3855 main.go:141] libmachine: STDERR: 
	I0917 02:31:02.037404    3855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:02.037409    3855 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:02.037426    3855 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:02.037468    3855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:42:58:cf:c4:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:02.039492    3855 main.go:141] libmachine: STDOUT: 
	I0917 02:31:02.039510    3855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:02.039531    3855 client.go:171] duration metric: took 301.915416ms to LocalClient.Create
	I0917 02:31:02.305780    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0917 02:31:02.308260    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0917 02:31:02.311367    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0917 02:31:02.341825    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0917 02:31:02.343261    3855 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 02:31:02.343285    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 02:31:02.355283    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0917 02:31:02.405739    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 02:31:02.540374    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0917 02:31:02.540435    3855 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 824.520375ms
	I0917 02:31:02.540474    3855 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0917 02:31:02.926540    3855 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 02:31:02.926679    3855 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 02:31:03.737200    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0917 02:31:03.737260    3855 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.021206417s
	I0917 02:31:03.737290    3855 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0917 02:31:03.859841    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 02:31:03.859918    3855 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.144037333s
	I0917 02:31:03.859953    3855 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 02:31:04.039688    3855 start.go:128] duration metric: took 2.32314175s to createHost
	I0917 02:31:04.039728    3855 start.go:83] releasing machines lock for "test-preload-773000", held for 2.323366459s
	W0917 02:31:04.039768    3855 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:04.050689    3855 out.go:177] * Deleting "test-preload-773000" in qemu2 ...
	W0917 02:31:04.080116    3855 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:04.080140    3855 start.go:729] Will try again in 5 seconds ...
	I0917 02:31:05.302842    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0917 02:31:05.302891    3855 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.586766167s
	I0917 02:31:05.302918    3855 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0917 02:31:06.835794    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0917 02:31:06.835860    3855 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.119989292s
	I0917 02:31:06.835889    3855 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0917 02:31:06.881502    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0917 02:31:06.881545    3855 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.165685959s
	I0917 02:31:06.881568    3855 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0917 02:31:07.967132    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0917 02:31:07.967197    3855 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.25110225s
	I0917 02:31:07.967236    3855 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0917 02:31:09.080273    3855 start.go:360] acquireMachinesLock for test-preload-773000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:31:09.080692    3855 start.go:364] duration metric: took 340.333µs to acquireMachinesLock for "test-preload-773000"
	I0917 02:31:09.080809    3855 start.go:93] Provisioning new machine with config: &{Name:test-preload-773000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-773000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:31:09.081049    3855 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:31:09.085765    3855 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:31:09.136849    3855 start.go:159] libmachine.API.Create for "test-preload-773000" (driver="qemu2")
	I0917 02:31:09.136895    3855 client.go:168] LocalClient.Create starting
	I0917 02:31:09.137012    3855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:31:09.137101    3855 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:09.137137    3855 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:09.137207    3855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:31:09.137254    3855 main.go:141] libmachine: Decoding PEM data...
	I0917 02:31:09.137270    3855 main.go:141] libmachine: Parsing certificate...
	I0917 02:31:09.137826    3855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:31:09.305827    3855 main.go:141] libmachine: Creating SSH key...
	I0917 02:31:09.349670    3855 main.go:141] libmachine: Creating Disk image...
	I0917 02:31:09.349677    3855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:31:09.349880    3855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:09.359375    3855 main.go:141] libmachine: STDOUT: 
	I0917 02:31:09.359390    3855 main.go:141] libmachine: STDERR: 
	I0917 02:31:09.359450    3855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2 +20000M
	I0917 02:31:09.367561    3855 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:31:09.367574    3855 main.go:141] libmachine: STDERR: 
	I0917 02:31:09.367591    3855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:09.367597    3855 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:31:09.367607    3855 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:31:09.367639    3855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:67:95:e3:28:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/test-preload-773000/disk.qcow2
	I0917 02:31:09.369352    3855 main.go:141] libmachine: STDOUT: 
	I0917 02:31:09.369367    3855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:31:09.369379    3855 client.go:171] duration metric: took 232.480125ms to LocalClient.Create
	I0917 02:31:10.373625    3855 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0917 02:31:10.373718    3855 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.657618167s
	I0917 02:31:10.373760    3855 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0917 02:31:10.373811    3855 cache.go:87] Successfully saved all images to host disk.
	I0917 02:31:11.371557    3855 start.go:128] duration metric: took 2.290465083s to createHost
	I0917 02:31:11.371651    3855 start.go:83] releasing machines lock for "test-preload-773000", held for 2.290906834s
	W0917 02:31:11.371983    3855 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:31:11.387755    3855 out.go:201] 
	W0917 02:31:11.391836    3855 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:31:11.391862    3855 out.go:270] * 
	* 
	W0917 02:31:11.394376    3855 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:31:11.407570    3855 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-773000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-17 02:31:11.425349 -0700 PDT m=+3226.710100334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-773000 -n test-preload-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-773000 -n test-preload-773000: exit status 7 (67.050459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-773000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-773000
--- FAIL: TestPreload (9.97s)
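The TestPreload trace above shows the driver's full retry shape: createHost fails, the half-created machine is deleted, minikube waits a fixed five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION; the image cache kept filling in the background regardless. A sketch of that control flow under the same error (createHost here is a stand-in, not minikube's own function):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startWithRetry mirrors the log: one failure, a 5s pause, one retry,
    // then a hard provisioning error.
    func startWithRetry(createHost func() error) error {
    	err := createHost()
    	if err == nil {
    		return nil
    	}
    	fmt.Println("! StartHost failed, but will try again:", err)
    	time.Sleep(5 * time.Second)
    	if err := createHost(); err != nil {
    		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
    	}
    	return nil
    }

    func main() {
    	refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    	fmt.Println(startWithRetry(func() error { return refused }))
    }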

TestScheduledStopUnix (9.96s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-404000 --memory=2048 --driver=qemu2 
E0917 02:31:18.065171    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-404000 --memory=2048 --driver=qemu2 : exit status 80 (9.807885667s)

-- stdout --
	* [scheduled-stop-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-404000" primary control-plane node in "scheduled-stop-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-404000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-404000" primary control-plane node in "scheduled-stop-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-17 02:31:21.380906 -0700 PDT m=+3236.665704751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-404000 -n scheduled-stop-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-404000 -n scheduled-stop-404000: exit status 7 (67.1625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-404000
--- FAIL: TestScheduledStopUnix (9.96s)
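Each post-mortem above shells out to the minikube binary and branches on the exit code rather than on output alone; exit status 7 from `status` only means the host is stopped, which the harness records as "may be ok". A sketch of that check (binary path and profile name are copied from the log):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64",
    		"status", "--format={{.Host}}", "-p", "scheduled-stop-404000")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("%s", out) // "Stopped" in the run above

    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		fmt.Println("exit code:", ee.ExitCode()) // 7 here: stopped, not necessarily broken
    	}
    }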

TestSkaffold (12.79s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3108501218 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3108501218 version: (1.067232s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-218000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-218000 --memory=2600 --driver=qemu2 : exit status 80 (9.934816708s)

-- stdout --
	* [skaffold-218000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-218000" primary control-plane node in "skaffold-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-218000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-218000" primary control-plane node in "skaffold-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-17 02:31:34.173219 -0700 PDT m=+3249.458078751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-218000 -n skaffold-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-218000 -n skaffold-218000: exit status 7 (64.391333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-218000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-218000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-218000
--- FAIL: TestSkaffold (12.79s)
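The --format={{.Host}} flag used in every post-mortem is an ordinary Go text/template evaluated against minikube's status structure, which is why a stopped host prints as the bare word "Stopped". A reduced sketch (this Status type is a stand-in carrying only the one field the template reads; the real struct has more):

    package main

    import (
    	"os"
    	"text/template"
    )

    type Status struct {
    	Host string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
    	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
    }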

TestRunningBinaryUpgrade (605.66s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1260592706 start -p running-upgrade-202000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1260592706 start -p running-upgrade-202000 --memory=2200 --vm-driver=qemu2 : (52.784024375s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-202000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0917 02:34:39.510237    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-202000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m39.054761459s)

-- stdout --
	* [running-upgrade-202000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-202000" primary control-plane node in "running-upgrade-202000" cluster
	* Updating the running qemu2 "running-upgrade-202000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0917 02:33:11.686877    4234 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:33:11.686998    4234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:33:11.687001    4234 out.go:358] Setting ErrFile to fd 2...
	I0917 02:33:11.687004    4234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:33:11.687148    4234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:33:11.688247    4234 out.go:352] Setting JSON to false
	I0917 02:33:11.704881    4234 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3761,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:33:11.704984    4234 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:33:11.711422    4234 out.go:177] * [running-upgrade-202000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:33:11.718447    4234 notify.go:220] Checking for updates...
	I0917 02:33:11.722409    4234 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:33:11.726362    4234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:33:11.730375    4234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:33:11.733386    4234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:33:11.736403    4234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:33:11.739543    4234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:33:11.742638    4234 config.go:182] Loaded profile config "running-upgrade-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:33:11.745370    4234 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 02:33:11.748384    4234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:33:11.752388    4234 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:33:11.759323    4234 start.go:297] selected driver: qemu2
	I0917 02:33:11.759329    4234 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:33:11.759384    4234 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:33:11.761781    4234 cni.go:84] Creating CNI manager for ""
	I0917 02:33:11.761816    4234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
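cni.go:158 records the rule applied here: with the docker runtime on Kubernetes v1.24+, where the in-tree dockershim is gone and cri-dockerd is in use, minikube recommends the bridge CNI. A hypothetical condensation of that decision (the function name and semver helper are assumptions; the real cni.New weighs many more driver/runtime combinations):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// chooseCNI is a hypothetical condensation of the decision logged above.
	func chooseCNI(driver, runtime, k8sVersion string) string {
		// On Kubernetes v1.24+ the dockershim networking is gone, so the
		// docker runtime needs an explicit CNI; bridge is the recommendation.
		if runtime == "docker" && semver.Compare(k8sVersion, "v1.24.0") >= 0 {
			return "bridge"
		}
		return "" // other combinations fall through to their own defaults
	}

	func main() {
		fmt.Println(chooseCNI("qemu2", "docker", "v1.24.1")) // bridge
	}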
	I0917 02:33:11.761844    4234 start.go:340] cluster config:
	{Name:running-upgrade-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:33:11.761898    4234 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:33:11.767377    4234 out.go:177] * Starting "running-upgrade-202000" primary control-plane node in "running-upgrade-202000" cluster
	I0917 02:33:11.771328    4234 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:33:11.771354    4234 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 02:33:11.771358    4234 cache.go:56] Caching tarball of preloaded images
	I0917 02:33:11.771433    4234 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:33:11.771440    4234 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 02:33:11.771496    4234 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/config.json ...
	I0917 02:33:11.771798    4234 start.go:360] acquireMachinesLock for running-upgrade-202000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:33:11.771836    4234 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "running-upgrade-202000"
	I0917 02:33:11.771845    4234 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:33:11.771852    4234 fix.go:54] fixHost starting: 
	I0917 02:33:11.772473    4234 fix.go:112] recreateIfNeeded on running-upgrade-202000: state=Running err=<nil>
	W0917 02:33:11.772481    4234 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:33:11.780395    4234 out.go:177] * Updating the running qemu2 "running-upgrade-202000" VM ...
	I0917 02:33:11.783304    4234 machine.go:93] provisionDockerMachine start ...
	I0917 02:33:11.783368    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:11.783520    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:11.783527    4234 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:33:11.835531    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-202000
	
	I0917 02:33:11.835544    4234 buildroot.go:166] provisioning hostname "running-upgrade-202000"
	I0917 02:33:11.835598    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:11.835715    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:11.835721    4234 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-202000 && echo "running-upgrade-202000" | sudo tee /etc/hostname
	I0917 02:33:11.894687    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-202000
	
	I0917 02:33:11.894751    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:11.894869    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:11.894878    4234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-202000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-202000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-202000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:33:11.945650    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
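The script run above is an idempotent hosts-file update: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append one. A standalone Go sketch of the same idea (this function is an illustration, not minikube's provisioner):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the logged shell: no-op when the name is
	// already mapped, replace the 127.0.1.1 alias line, else append one.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) >= 2 && f[len(f)-1] == name {
				return nil // hostname already mapped; leave the file alone
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // rewrite the loopback alias
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+name) // no alias line: append one
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "running-upgrade-202000"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}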
	I0917 02:33:11.945663    4234 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1056/.minikube}
	I0917 02:33:11.945672    4234 buildroot.go:174] setting up certificates
	I0917 02:33:11.945677    4234 provision.go:84] configureAuth start
	I0917 02:33:11.945681    4234 provision.go:143] copyHostCerts
	I0917 02:33:11.945759    4234 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem, removing ...
	I0917 02:33:11.945766    4234 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem
	I0917 02:33:11.945887    4234 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem (1082 bytes)
	I0917 02:33:11.946060    4234 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem, removing ...
	I0917 02:33:11.946066    4234 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem
	I0917 02:33:11.946108    4234 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem (1123 bytes)
	I0917 02:33:11.946199    4234 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem, removing ...
	I0917 02:33:11.946202    4234 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem
	I0917 02:33:11.946249    4234 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem (1675 bytes)
	I0917 02:33:11.946334    4234 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-202000 san=[127.0.0.1 localhost minikube running-upgrade-202000]
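provision.go:117 issues a server certificate signed by the cached minikube CA, carrying the SANs listed in san=[...]. A compact sketch of issuing such a SAN certificate with Go's standard library (the self-signed stand-in CA, key size, and validity window are illustrative assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative self-signed "CA" standing in for minikube's ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SANs from the log: one IP, three DNS names.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-202000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "running-upgrade-202000"},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}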
	I0917 02:33:12.089817    4234 provision.go:177] copyRemoteCerts
	I0917 02:33:12.089872    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:33:12.089881    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:33:12.116952    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 02:33:12.123908    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 02:33:12.130508    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 02:33:12.137606    4234 provision.go:87] duration metric: took 191.920542ms to configureAuth
	I0917 02:33:12.137615    4234 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:33:12.137719    4234 config.go:182] Loaded profile config "running-upgrade-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:33:12.137766    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:12.137852    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:12.137857    4234 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:33:12.192448    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:33:12.192459    4234 buildroot.go:70] root file system type: tmpfs
	I0917 02:33:12.192512    4234 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:33:12.192582    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:12.192705    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:12.192737    4234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:33:12.255657    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:33:12.255719    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:12.255836    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:12.255846    4234 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:33:12.313312    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:33:12.313326    4234 machine.go:96] duration metric: took 530.017667ms to provisionDockerMachine
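The one-liner run just above is an idempotent unit swap: diff -u exits 0 when the rendered docker.service.new matches the live unit, so the || branch, and with it the docker restart, only runs when the file actually changed (which is why this step finished without restarting docker here). A sketch of the same guard in Go, shelling out to the identical commands (the helper function is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// updateDockerUnit replaces the unit and restarts docker only when the
	// freshly rendered file differs from the one already installed.
	func updateDockerUnit() error {
		// diff exits 0 when the files match; nothing needs restarting then.
		if exec.Command("sudo", "diff", "-u",
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new").Run() == nil {
			return nil
		}
		for _, args := range [][]string{
			{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := updateDockerUnit(); err != nil {
			fmt.Println(err)
		}
	}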
	I0917 02:33:12.313332    4234 start.go:293] postStartSetup for "running-upgrade-202000" (driver="qemu2")
	I0917 02:33:12.313339    4234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:33:12.313396    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:33:12.313413    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:33:12.341395    4234 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:33:12.342732    4234 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 02:33:12.342740    4234 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/addons for local assets ...
	I0917 02:33:12.342812    4234 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/files for local assets ...
	I0917 02:33:12.342914    4234 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem -> 15552.pem in /etc/ssl/certs
	I0917 02:33:12.343009    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:33:12.345777    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:33:12.352917    4234 start.go:296] duration metric: took 39.5795ms for postStartSetup
	I0917 02:33:12.352930    4234 fix.go:56] duration metric: took 581.084167ms for fixHost
	I0917 02:33:12.352978    4234 main.go:141] libmachine: Using SSH client type: native
	I0917 02:33:12.353084    4234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dad190] 0x104daf9d0 <nil>  [] 0s} localhost 50236 <nil> <nil>}
	I0917 02:33:12.353088    4234 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:33:12.403043    4234 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565591.977812179
	
	I0917 02:33:12.403053    4234 fix.go:216] guest clock: 1726565591.977812179
	I0917 02:33:12.403057    4234 fix.go:229] Guest: 2024-09-17 02:33:11.977812179 -0700 PDT Remote: 2024-09-17 02:33:12.352932 -0700 PDT m=+0.685916751 (delta=-375.119821ms)
	I0917 02:33:12.403069    4234 fix.go:200] guest clock delta is within tolerance: -375.119821ms
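fix.go reads the guest clock with date +%s.%N over SSH and compares it to the host clock at the moment the command returns; here the guest is about 375ms behind, inside tolerance, so no resync is attempted. A small sketch of that comparison (the one-second tolerance is an assumed stand-in for minikube's actual threshold; float parsing drops nanosecond precision, which is irrelevant at millisecond scale):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1726565591.977812179" // what `date +%s.%N` printed in the guest
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())

		// Assumed tolerance; a resync is only worth doing for large skew.
		const tolerance = time.Second
		if delta < -tolerance || delta > tolerance {
			fmt.Println("guest clock needs adjusting, delta =", delta)
		} else {
			fmt.Println("guest clock delta is within tolerance:", delta)
		}
	}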
	I0917 02:33:12.403073    4234 start.go:83] releasing machines lock for "running-upgrade-202000", held for 631.233834ms
	I0917 02:33:12.403151    4234 ssh_runner.go:195] Run: cat /version.json
	I0917 02:33:12.403163    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:33:12.403151    4234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:33:12.403190    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	W0917 02:33:12.403794    4234 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50361->127.0.0.1:50236: read: connection reset by peer
	I0917 02:33:12.403810    4234 retry.go:31] will retry after 270.090144ms: ssh: handshake failed: read tcp 127.0.0.1:50361->127.0.0.1:50236: read: connection reset by peer
	W0917 02:33:12.704929    4234 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 02:33:12.705001    4234 ssh_runner.go:195] Run: systemctl --version
	I0917 02:33:12.708644    4234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:33:12.711314    4234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:33:12.711364    4234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 02:33:12.714657    4234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 02:33:12.719409    4234 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:33:12.719421    4234 start.go:495] detecting cgroup driver to use...
	I0917 02:33:12.719498    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:33:12.725271    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 02:33:12.728116    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:33:12.730937    4234 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:33:12.730970    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:33:12.734337    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:33:12.737876    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:33:12.741092    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:33:12.744109    4234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:33:12.747117    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:33:12.750503    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:33:12.754059    4234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:33:12.757421    4234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:33:12.760209    4234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:33:12.762873    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:12.862115    4234 ssh_runner.go:195] Run: sudo systemctl restart containerd
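The run of sed commands above forces containerd into the configuration minikube chose: the cgroupfs driver instead of SystemdCgroup, the runc v2 shim, the standard CNI conf dir, then IPv4 forwarding, a daemon-reload, and a containerd restart. A sketch of one of those edits done natively in Go rather than via sed (the paths match the log; the rewrite itself is illustrative):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		path := "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, data, 0644); err != nil {
			panic(err)
		}
		// And the logged `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}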
	I0917 02:33:12.869608    4234 start.go:495] detecting cgroup driver to use...
	I0917 02:33:12.869681    4234 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:33:12.875798    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:33:12.884326    4234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:33:12.890072    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:33:12.895312    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:33:12.900102    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:33:12.905120    4234 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:33:12.906539    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:33:12.909574    4234 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 02:33:12.914510    4234 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:33:13.003067    4234 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:33:13.095873    4234 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:33:13.095927    4234 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:33:13.101118    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:13.188460    4234 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:33:30.664160    4234 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.47576325s)
	I0917 02:33:30.664236    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:33:30.669084    4234 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:33:30.676151    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:33:30.682922    4234 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:33:30.763569    4234 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:33:30.841188    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:30.919878    4234 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:33:30.925544    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:33:30.930002    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:31.007610    4234 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:33:31.050765    4234 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:33:31.050863    4234 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:33:31.053304    4234 start.go:563] Will wait 60s for crictl version
	I0917 02:33:31.053366    4234 ssh_runner.go:195] Run: which crictl
	I0917 02:33:31.054694    4234 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:33:31.066937    4234 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 02:33:31.067022    4234 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:33:31.079904    4234 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:33:31.100418    4234 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 02:33:31.100511    4234 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 02:33:31.101902    4234 kubeadm.go:883] updating cluster {Name:running-upgrade-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 02:33:31.101949    4234 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:33:31.101999    4234 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:33:31.112186    4234 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:33:31.112194    4234 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:33:31.112247    4234 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:33:31.115274    4234 ssh_runner.go:195] Run: which lz4
	I0917 02:33:31.116602    4234 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 02:33:31.117774    4234 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 02:33:31.117783    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 02:33:32.107929    4234 docker.go:649] duration metric: took 991.378375ms to copy over tarball
	I0917 02:33:32.107994    4234 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 02:33:33.204342    4234 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.096330334s)
	I0917 02:33:33.204358    4234 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 02:33:33.219627    4234 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:33:33.222557    4234 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 02:33:33.227616    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:33.298658    4234 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:33:34.478905    4234 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.180237209s)
	I0917 02:33:34.479009    4234 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:33:34.489632    4234 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:33:34.489642    4234 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:33:34.489647    4234 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 02:33:34.495096    4234 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:33:34.496858    4234 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:33:34.498059    4234 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:33:34.498195    4234 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 02:33:34.499634    4234 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:33:34.499728    4234 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:33:34.500658    4234 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:33:34.500874    4234 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 02:33:34.501959    4234 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:33:34.502468    4234 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:33:34.503442    4234 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:33:34.503606    4234 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:33:34.504572    4234 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:33:34.504682    4234 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:33:34.505576    4234 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:33:34.506323    4234 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:33:34.857537    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:33:34.870047    4234 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 02:33:34.870075    4234 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:33:34.870141    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:33:34.876486    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 02:33:34.882810    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0917 02:33:34.889275    4234 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 02:33:34.889294    4234 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 02:33:34.889364    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0917 02:33:34.899158    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 02:33:34.899283    4234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 02:33:34.900999    4234 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 02:33:34.901009    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 02:33:34.902782    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 02:33:34.909506    4234 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 02:33:34.909520    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 02:33:34.914063    4234 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 02:33:34.914082    4234 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:33:34.914161    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 02:33:34.933707    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W0917 02:33:34.943531    4234 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 02:33:34.944150    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:33:34.949707    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:33:34.950980    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 02:33:34.951103    4234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:33:34.951116    4234 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
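Blocks like the ones above trace one cache cycle per image: docker image inspect to check whether the runtime already has the image at the expected ID, docker rmi to drop a stale copy, a stat existence check for the tarball inside the guest, an scp from the host cache, and finally cat | docker load. A loose condensation of that cycle (run is a hypothetical stand-in for minikube's ssh_runner, the scp target syntax is a placeholder, and the paths are illustrative; the real cache_images logic is more involved):

	package main

	import "os/exec"

	// run is a hypothetical stand-in for minikube's ssh_runner: execute a
	// command and report only success or failure.
	func run(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }

	// ensureImage mirrors the logged cycle for a single image tarball.
	func ensureImage(image, hostTar, guestTar string) error {
		// Already present under the right reference? Nothing to do.
		if run("docker", "image", "inspect", "--format", "{{.Id}}", image) == nil {
			return nil
		}
		_ = run("docker", "rmi", image) // drop any stale/mismatched copy
		if run("stat", guestTar) != nil {
			// Not in the guest yet: copy it over from the host cache.
			if err := run("scp", hostTar, "guest:"+guestTar); err != nil {
				return err
			}
		}
		// Pipe the tarball into the daemon, as in the logged bash -c line.
		return run("bash", "-c", "sudo cat "+guestTar+" | docker load")
	}

	func main() {
		_ = ensureImage("registry.k8s.io/pause:3.7",
			"/path/to/cache/pause_3.7", // illustrative host cache path
			"/var/lib/minikube/images/pause_3.7")
	}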
	I0917 02:33:34.956684    4234 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 02:33:34.956708    4234 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:33:34.956765    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:33:34.977527    4234 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 02:33:34.977552    4234 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:33:34.977622    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:33:34.977642    4234 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 02:33:34.977656    4234 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:33:34.977691    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:33:34.977712    4234 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 02:33:34.977727    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0917 02:33:34.988121    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 02:33:35.008728    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 02:33:35.010427    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 02:33:35.010570    4234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:33:35.016938    4234 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 02:33:35.016982    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 02:33:35.018654    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:33:35.058454    4234 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 02:33:35.058483    4234 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:33:35.058568    4234 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:33:35.101677    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 02:33:35.108718    4234 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:33:35.108734    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 02:33:35.226328    4234 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0917 02:33:35.304662    4234 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:33:35.304676    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0917 02:33:35.377540    4234 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 02:33:35.377670    4234 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:33:35.450831    4234 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 02:33:35.450906    4234 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 02:33:35.450925    4234 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:33:35.450988    4234 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:33:36.972442    4234 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.521429041s)
	I0917 02:33:36.972474    4234 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 02:33:36.972842    4234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:33:36.978860    4234 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 02:33:36.978901    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 02:33:37.037248    4234 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:33:37.037263    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 02:33:37.276634    4234 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 02:33:37.276670    4234 cache_images.go:92] duration metric: took 2.787030625s to LoadCachedImages
	W0917 02:33:37.276708    4234 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0917 02:33:37.276716    4234 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 02:33:37.276764    4234 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-202000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:33:37.276839    4234 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:33:37.292580    4234 cni.go:84] Creating CNI manager for ""
	I0917 02:33:37.292597    4234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:33:37.292606    4234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:33:37.292617    4234 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-202000 NodeName:running-upgrade-202000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:33:37.292682    4234 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-202000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 02:33:37.292736    4234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 02:33:37.295831    4234 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:33:37.295867    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 02:33:37.298639    4234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 02:33:37.303873    4234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:33:37.309426    4234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0917 02:33:37.314965    4234 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 02:33:37.316503    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:33:37.404396    4234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:33:37.409688    4234 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000 for IP: 10.0.2.15
	I0917 02:33:37.409696    4234 certs.go:194] generating shared ca certs ...
	I0917 02:33:37.409704    4234 certs.go:226] acquiring lock for ca certs: {Name:mkff5fc329c6145be4c1381e1b58175b65aa8cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:33:37.409858    4234 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key
	I0917 02:33:37.409895    4234 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key
	I0917 02:33:37.409904    4234 certs.go:256] generating profile certs ...
	I0917 02:33:37.409966    4234 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.key
	I0917 02:33:37.409984    4234 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key.bb9e4622
	I0917 02:33:37.409993    4234 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt.bb9e4622 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 02:33:37.531598    4234 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt.bb9e4622 ...
	I0917 02:33:37.531604    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt.bb9e4622: {Name:mkdd0db7fbd66000253f31452b35df3f1b696c46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:33:37.532324    4234 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key.bb9e4622 ...
	I0917 02:33:37.532329    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key.bb9e4622: {Name:mk23c0342e69d889c538cce40181a89a654a8441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:33:37.532495    4234 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt.bb9e4622 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt
	I0917 02:33:37.532694    4234 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key.bb9e4622 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key
	I0917 02:33:37.532834    4234 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/proxy-client.key
	I0917 02:33:37.532955    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem (1338 bytes)
	W0917 02:33:37.532978    4234 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555_empty.pem, impossibly tiny 0 bytes
	I0917 02:33:37.532984    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 02:33:37.533004    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem (1082 bytes)
	I0917 02:33:37.533023    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:33:37.533042    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem (1675 bytes)
	I0917 02:33:37.533085    4234 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:33:37.533425    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:33:37.541096    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 02:33:37.549032    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:33:37.556090    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:33:37.562959    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 02:33:37.570960    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 02:33:37.577846    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:33:37.585428    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 02:33:37.592464    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:33:37.599191    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem --> /usr/share/ca-certificates/1555.pem (1338 bytes)
	I0917 02:33:37.606143    4234 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /usr/share/ca-certificates/15552.pem (1708 bytes)
	I0917 02:33:37.613419    4234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:33:37.618612    4234 ssh_runner.go:195] Run: openssl version
	I0917 02:33:37.620630    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:33:37.623637    4234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:33:37.625156    4234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:33:37.625185    4234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:33:37.627028    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:33:37.630164    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1555.pem && ln -fs /usr/share/ca-certificates/1555.pem /etc/ssl/certs/1555.pem"
	I0917 02:33:37.633587    4234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1555.pem
	I0917 02:33:37.634969    4234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:53 /usr/share/ca-certificates/1555.pem
	I0917 02:33:37.634997    4234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1555.pem
	I0917 02:33:37.636969    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1555.pem /etc/ssl/certs/51391683.0"
	I0917 02:33:37.639571    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15552.pem && ln -fs /usr/share/ca-certificates/15552.pem /etc/ssl/certs/15552.pem"
	I0917 02:33:37.642740    4234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15552.pem
	I0917 02:33:37.644270    4234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:53 /usr/share/ca-certificates/15552.pem
	I0917 02:33:37.644298    4234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15552.pem
	I0917 02:33:37.646045    4234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15552.pem /etc/ssl/certs/3ec20f2e.0"
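
The three "openssl x509 -hash -noout" calls above print OpenSSL's subject-name hash for each CA file, and the paired "ln -fs" creates the <hash>.0 symlink (for example b5213941.0) that OpenSSL's CApath lookup expects. A minimal Go sketch of that hash-and-link step, assuming openssl on PATH and the host paths shown in the log:

-- go sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash replicates the hash-and-symlink step from the log:
// it asks openssl for the certificate's subject hash, then links
// /etc/ssl/certs/<hash>.0 at the PEM so CApath lookups can find it.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link first, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
-- /go sketch --
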
	I0917 02:33:37.649201    4234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:33:37.650686    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:33:37.652420    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:33:37.654109    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:33:37.656033    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:33:37.658043    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:33:37.659907    4234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
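
Each of the "-checkend 86400" probes exits non-zero when the certificate expires within the next 86,400 seconds (24 hours), which is how the tooling decides whether a cert still has a day of validity left. The equivalent check with Go's standard library, using one of the cert paths from the log:

-- go sketch --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window -- the same test as
// `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
-- /go sketch --
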
	I0917 02:33:37.661754    4234 kubeadm.go:392] StartCluster: {Name:running-upgrade-202000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50268 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-202000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:33:37.661829    4234 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:33:37.672102    4234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:33:37.675358    4234 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:33:37.675367    4234 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:33:37.675398    4234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:33:37.678499    4234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:33:37.678766    4234 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-202000" does not appear in /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:33:37.678812    4234 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1056/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-202000" cluster setting kubeconfig missing "running-upgrade-202000" context setting]
	I0917 02:33:37.678961    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:33:37.680056    4234 kapi.go:59] client config for running-upgrade-202000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106385800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:33:37.680384    4234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:33:37.683233    4234 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-202000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
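
The drift detection above hinges on "diff -u" exit codes: 0 means the deployed kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, 1 means they differ (here the CRI socket gained its unix:// scheme and the cgroup driver moved from systemd to cgroupfs), and anything higher is a real failure. A sketch of that decision, assuming the two paths from the log:

-- go sketch --
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// configDrifted runs `diff -u old new` and maps the exit status:
// 0 -> identical, 1 -> files differ (drift), >1 -> real failure.
func configDrifted(oldPath, newPath string) (bool, error) {
	cmd := exec.Command("diff", "-u", oldPath, newPath)
	out, err := cmd.Output()
	if err == nil {
		return false, nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		fmt.Printf("detected config drift:\n%s", out)
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("drifted:", drifted)
}
-- /go sketch --
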
	I0917 02:33:37.683239    4234 kubeadm.go:1160] stopping kube-system containers ...
	I0917 02:33:37.683291    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:33:37.697192    4234 docker.go:483] Stopping containers: [11cadc5c740e fa7c911aa056 678c01eacfd1 67b5d50aae99 a2fd9db7db24 8a41a9b8943b 52db91921966 8cb9d51bec3f 5aef8cfb7a95 d5ee745e2bc1 e6d9a021a342 ae3fa2878147 006b30ba4ec6]
	I0917 02:33:37.697274    4234 ssh_runner.go:195] Run: docker stop 11cadc5c740e fa7c911aa056 678c01eacfd1 67b5d50aae99 a2fd9db7db24 8a41a9b8943b 52db91921966 8cb9d51bec3f 5aef8cfb7a95 d5ee745e2bc1 e6d9a021a342 ae3fa2878147 006b30ba4ec6
	I0917 02:33:37.708718    4234 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 02:33:37.800797    4234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:33:37.805354    4234 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 17 09:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 17 09:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 17 09:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 17 09:33 /etc/kubernetes/scheduler.conf
	
	I0917 02:33:37.805392    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0917 02:33:37.808857    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:33:37.808896    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:33:37.812444    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0917 02:33:37.815343    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:33:37.815371    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:33:37.818385    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0917 02:33:37.821473    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:33:37.821500    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:33:37.824506    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0917 02:33:37.827064    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:33:37.827091    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
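
Each of the four grep/rm pairs above keeps a kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint; a grep exit status of 1 marks the file stale, and it is removed so the kubeadm kubeconfig phase can regenerate it. A stdlib sketch of that keep-or-delete rule, using the endpoint from the log:

-- go sketch --
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfig deletes conf unless it already references the
// expected control-plane endpoint, mirroring the grep/rm pairs above.
func pruneStaleKubeconfig(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // endpoint present: keep the file
	}
	return os.Remove(conf) // stale: let kubeadm regenerate it
}

func main() {
	endpoint := "https://control-plane.minikube.internal:50268"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(conf, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
-- /go sketch --
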
	I0917 02:33:37.830154    4234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:33:37.833639    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:33:37.857064    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:33:38.470408    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:33:38.676601    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:33:38.696489    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
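
Rather than a full kubeadm init, the restart path replays individual init phases against the refreshed config: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A sketch of the same sequence, assuming the kubeadm binary path from the log:

-- go sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Phases in the order the log runs them.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
-- /go sketch --
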
	I0917 02:33:38.720937    4234 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:33:38.721026    4234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:33:39.223462    4234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:33:39.723114    4234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:33:39.727462    4234 api_server.go:72] duration metric: took 1.006531583s to wait for apiserver process to appear ...
	I0917 02:33:39.727471    4234 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:33:39.727486    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:33:44.729570    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:33:44.729621    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:33:49.730140    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:33:49.730237    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:33:54.731353    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:33:54.731452    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:33:59.732892    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:33:59.732985    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:04.734960    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:04.735040    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:09.737226    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:09.737334    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:14.740165    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:14.740264    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:19.741510    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:19.741608    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:24.744342    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:24.744437    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:29.747166    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:29.747265    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:34.749958    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:34.750052    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:39.752410    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
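
Each healthz probe above gives up after about five seconds; the repeated "Client.Timeout exceeded while awaiting headers" errors are what Go's http.Client reports when its overall Timeout elapses before response headers arrive. A minimal sketch of such a poll loop; TLS verification is skipped here purely to keep the example self-contained, whereas the real client config shown earlier pins the minikube CA:

-- go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe budget, headers included
		Transport: &http.Transport{
			// Assumption for brevity only; the logged client trusts ca.crt.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 12; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
			continue                     // the timeout itself paces the loop
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(5 * time.Second)
	}
}
-- /go sketch --
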
	I0917 02:34:39.752981    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:34:39.793315    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:34:39.793484    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:34:39.820505    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:34:39.820610    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:34:39.834487    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:34:39.834572    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:34:39.846784    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:34:39.846874    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:34:39.857813    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:34:39.857898    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:34:39.868674    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:34:39.868752    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:34:39.883406    4234 logs.go:276] 0 containers: []
	W0917 02:34:39.883420    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:34:39.883499    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:34:39.894448    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:34:39.894469    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:34:39.894476    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:34:39.969147    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:34:39.969158    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:34:39.987935    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:34:39.987946    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:34:39.999860    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:34:39.999871    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:34:40.013205    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:34:40.013226    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:34:40.028201    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:34:40.028212    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:34:40.040245    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:34:40.040255    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:34:40.064736    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:34:40.064744    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:34:40.079260    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:34:40.079269    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:34:40.090733    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:34:40.090743    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:34:40.131589    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:34:40.131600    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:34:40.145151    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:34:40.145160    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:34:40.157945    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:34:40.157956    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:34:40.175305    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:34:40.175315    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:34:40.211116    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:34:40.211124    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:34:40.215420    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:34:40.215429    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:34:40.229560    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:34:40.229570    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
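
With the apiserver still unreachable, each retry round falls back to diagnostics: docker ps -a with a name=k8s_<component> filter to find current and exited container IDs, then docker logs --tail 400 per container, plus journalctl for the kubelet and Docker units. The later rounds below repeat this same pattern. A sketch of the per-container part, assuming docker on PATH:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the k8s_<component> pattern used in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}
}
-- /go sketch --
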
	I0917 02:34:42.746615    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:47.749223    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:47.749774    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:34:47.792229    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:34:47.792412    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:34:47.816771    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:34:47.816911    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:34:47.831716    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:34:47.831823    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:34:47.844218    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:34:47.844305    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:34:47.854733    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:34:47.854825    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:34:47.869041    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:34:47.869128    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:34:47.879312    4234 logs.go:276] 0 containers: []
	W0917 02:34:47.879326    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:34:47.879399    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:34:47.890230    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:34:47.890248    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:34:47.890255    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:34:47.927057    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:34:47.927065    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:34:47.931372    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:34:47.931381    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:34:47.968126    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:34:47.968136    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:34:47.982656    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:34:47.982667    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:34:47.997259    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:34:47.997270    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:34:48.014210    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:34:48.014220    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:34:48.025222    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:34:48.025232    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:34:48.052853    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:34:48.052869    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:34:48.066400    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:34:48.066409    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:34:48.078205    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:34:48.078215    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:34:48.116666    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:34:48.116678    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:34:48.130446    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:34:48.130455    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:34:48.141566    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:34:48.141575    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:34:48.156201    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:34:48.156211    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:34:48.168197    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:34:48.168207    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:34:48.182249    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:34:48.182258    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:34:50.695835    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:34:55.698596    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:34:55.699096    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:34:55.734778    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:34:55.734945    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:34:55.756090    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:34:55.756228    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:34:55.772163    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:34:55.772248    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:34:55.788367    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:34:55.788461    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:34:55.798734    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:34:55.798814    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:34:55.809180    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:34:55.809256    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:34:55.819346    4234 logs.go:276] 0 containers: []
	W0917 02:34:55.819356    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:34:55.819416    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:34:55.830156    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:34:55.830174    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:34:55.830179    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:34:55.867621    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:34:55.867629    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:34:55.886343    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:34:55.886353    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:34:55.901419    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:34:55.901429    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:34:55.927975    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:34:55.927984    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:34:55.946298    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:34:55.946311    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:34:55.960220    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:34:55.960233    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:34:55.971731    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:34:55.971768    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:34:55.975988    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:34:55.975997    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:34:56.010563    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:34:56.010579    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:34:56.021941    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:34:56.021952    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:34:56.038045    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:34:56.038055    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:34:56.076240    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:34:56.076252    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:34:56.097045    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:34:56.097055    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:34:56.111483    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:34:56.111493    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:34:56.122741    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:34:56.122750    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:34:56.134071    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:34:56.134080    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:34:58.649455    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:03.652258    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:03.652812    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:03.697596    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:03.697769    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:03.717897    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:03.718028    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:03.732600    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:03.732687    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:03.744713    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:03.744806    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:03.755081    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:03.755157    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:03.765708    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:03.765791    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:03.776104    4234 logs.go:276] 0 containers: []
	W0917 02:35:03.776114    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:03.776199    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:03.794758    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:03.794774    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:03.794780    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:03.808862    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:03.808874    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:03.826694    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:03.826706    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:03.841035    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:03.841045    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:03.867787    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:03.867795    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:03.905511    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:03.905524    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:03.927375    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:03.927386    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:03.941199    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:03.941209    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:03.952761    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:03.952771    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:03.972180    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:03.972188    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:03.983799    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:03.983808    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:04.020970    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:04.020984    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:04.025607    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:04.025613    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:04.059973    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:04.059986    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:04.075438    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:04.075449    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:04.087138    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:04.087149    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:04.098944    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:04.098956    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:06.612932    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:11.615793    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:11.616350    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:11.654610    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:11.654784    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:11.677414    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:11.677542    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:11.691761    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:11.691849    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:11.704427    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:11.704515    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:11.714985    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:11.715069    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:11.730835    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:11.730911    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:11.745954    4234 logs.go:276] 0 containers: []
	W0917 02:35:11.745969    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:11.746041    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:11.756361    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:11.756379    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:11.756384    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:11.775037    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:11.775048    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:11.810102    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:11.810110    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:11.847491    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:11.847505    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:11.861983    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:11.861992    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:11.875463    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:11.875473    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:11.892147    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:11.892157    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:11.917768    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:11.917776    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:11.929300    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:11.929312    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:11.943360    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:11.943372    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:11.959427    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:11.959437    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:11.973815    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:11.973826    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:11.985455    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:11.985469    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:12.000620    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:12.000633    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:12.013114    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:12.013127    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:12.017566    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:12.017573    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:12.051786    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:12.051797    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:14.567939    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:19.570701    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:19.570970    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:19.599292    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:19.599388    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:19.614029    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:19.614128    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:19.629922    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:19.630013    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:19.643901    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:19.643990    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:19.655641    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:19.655724    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:19.666376    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:19.666457    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:19.676527    4234 logs.go:276] 0 containers: []
	W0917 02:35:19.676539    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:19.676611    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:19.686556    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:19.686575    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:19.686581    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:19.700421    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:19.700431    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:19.711524    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:19.711535    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:19.736901    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:19.736909    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:19.748133    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:19.748142    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:19.783815    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:19.783826    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:19.797939    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:19.797953    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:19.812474    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:19.812486    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:19.849272    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:19.849285    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:19.866259    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:19.866271    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:19.877284    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:19.877297    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:19.891479    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:19.891489    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:19.905115    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:19.905124    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:19.916338    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:19.916349    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:19.927801    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:19.927809    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:19.932111    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:19.932119    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:19.967177    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:19.967188    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:22.484225    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:27.486925    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:27.487254    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:27.515477    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:27.515658    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:27.533338    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:27.533430    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:27.549137    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:27.549214    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:27.561209    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:27.561281    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:27.571419    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:27.571497    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:27.581519    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:27.581596    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:27.593385    4234 logs.go:276] 0 containers: []
	W0917 02:35:27.593397    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:27.593459    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:27.604856    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:27.604874    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:27.604879    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:27.622580    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:27.622591    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:27.637097    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:27.637108    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:27.651892    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:27.651903    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:27.663935    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:27.663950    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:27.700658    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:27.700669    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:27.704802    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:27.704810    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:27.739498    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:27.739513    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:27.753686    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:27.753700    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:27.768131    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:27.768147    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:27.779750    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:27.779765    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:27.794477    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:27.794489    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:27.805764    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:27.805776    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:27.843348    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:27.843357    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:27.854499    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:27.854510    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:27.868009    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:27.868019    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:27.891901    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:27.891910    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:30.414261    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:35.417116    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:35.417684    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:35.456947    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:35.457108    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:35.482123    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:35.482245    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:35.497434    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:35.497527    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:35.509555    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:35.509635    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:35.525096    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:35.525181    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:35.535424    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:35.535505    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:35.545488    4234 logs.go:276] 0 containers: []
	W0917 02:35:35.545500    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:35.545577    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:35.561656    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:35.561673    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:35.561678    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:35.599468    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:35.599482    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:35.613643    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:35.613653    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:35.624496    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:35.624510    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:35.635979    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:35.635988    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:35.647718    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:35.647727    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:35.661813    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:35.661826    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:35.676261    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:35.676270    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:35.700606    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:35.700616    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:35.736839    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:35.736851    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:35.753604    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:35.753614    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:35.764678    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:35.764687    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:35.769015    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:35.769022    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:35.803983    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:35.803994    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:35.818577    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:35.818587    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:35.832649    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:35.832660    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:35.846687    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:35.846699    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:38.359224    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:43.361518    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:43.361712    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:43.374123    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:43.374209    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:43.384827    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:43.384912    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:43.395118    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:43.395196    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:43.405399    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:43.405485    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:43.415880    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:43.415953    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:43.426653    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:43.426723    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:43.436875    4234 logs.go:276] 0 containers: []
	W0917 02:35:43.436887    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:43.436959    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:43.447555    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:43.447574    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:43.447579    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:43.452097    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:43.452104    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:43.489478    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:43.489492    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:43.503430    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:43.503444    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:43.517776    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:43.517787    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:43.539039    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:43.539054    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:43.553097    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:43.553106    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:43.566943    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:43.566959    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:43.604265    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:43.604275    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:43.619836    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:43.619847    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:43.645299    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:43.645307    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:43.657250    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:43.657261    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:43.669150    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:43.669160    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:43.686719    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:43.686730    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:43.698687    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:43.698696    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:43.736013    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:43.736024    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:43.747160    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:43.747169    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:46.260342    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:51.262605    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:51.262748    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:51.274952    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:51.275039    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:51.287693    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:51.287791    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:51.305596    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:51.305691    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:51.316911    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:51.316999    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:51.337205    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:51.337298    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:51.349533    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:51.349614    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:51.359931    4234 logs.go:276] 0 containers: []
	W0917 02:35:51.359945    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:51.360027    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:51.370855    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:51.370872    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:51.370878    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:51.382369    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:51.382378    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:51.400847    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:51.400862    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:51.414984    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:51.414996    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:51.452781    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:51.452789    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:51.466297    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:51.466306    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:51.480440    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:51.480455    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:35:51.495201    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:51.495212    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:51.507122    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:51.507132    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:51.544325    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:51.544335    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:51.559150    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:51.559164    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:51.573087    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:51.573101    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:51.597226    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:51.597232    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:51.601108    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:51.601114    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:51.611989    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:51.612000    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:51.623458    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:51.623471    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:51.635457    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:51.635469    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:54.174750    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:35:59.177586    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:35:59.177836    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:35:59.197490    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:35:59.197587    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:35:59.211216    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:35:59.211303    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:35:59.222739    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:35:59.222825    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:35:59.234616    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:35:59.234703    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:35:59.245603    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:35:59.245688    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:35:59.256303    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:35:59.256383    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:35:59.267326    4234 logs.go:276] 0 containers: []
	W0917 02:35:59.267338    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:35:59.267417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:35:59.278078    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:35:59.278097    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:35:59.278103    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:35:59.292610    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:35:59.292620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:35:59.305915    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:35:59.305924    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:35:59.323171    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:35:59.323183    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:35:59.335454    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:35:59.335465    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:35:59.339966    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:35:59.339973    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:35:59.357946    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:35:59.357959    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:35:59.372239    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:35:59.372250    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:35:59.389863    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:35:59.389873    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:35:59.401653    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:35:59.401663    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:35:59.438447    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:35:59.438460    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:35:59.476906    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:35:59.476917    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:35:59.491495    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:35:59.491512    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:35:59.515998    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:35:59.516006    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:35:59.551982    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:35:59.551994    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:35:59.567307    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:35:59.567318    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:35:59.582138    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:35:59.582155    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:02.107769    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:07.108252    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:07.108451    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:07.120319    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:07.120417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:07.131549    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:07.131637    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:07.142923    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:07.143011    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:07.154187    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:07.154272    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:07.165279    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:07.165354    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:07.176081    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:07.176163    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:07.186249    4234 logs.go:276] 0 containers: []
	W0917 02:36:07.186262    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:07.186345    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:07.197088    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:07.197103    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:07.197110    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:07.221719    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:07.221727    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:07.258141    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:07.258151    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:07.272258    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:07.272269    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:07.292336    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:07.292347    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:07.304704    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:07.304716    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:07.319557    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:07.319567    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:07.334244    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:07.334254    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:07.346484    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:07.346496    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:07.358639    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:07.358650    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:07.395624    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:07.395636    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:07.440841    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:07.440858    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:07.452414    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:07.452427    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:07.466420    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:07.466431    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:07.473774    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:07.473782    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:07.489641    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:07.489651    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:07.506842    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:07.506851    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:10.021160    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:15.024032    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:15.024551    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:15.064742    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:15.064916    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:15.091229    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:15.091366    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:15.105082    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:15.105183    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:15.117241    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:15.117323    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:15.133461    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:15.133543    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:15.144491    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:15.144568    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:15.155012    4234 logs.go:276] 0 containers: []
	W0917 02:36:15.155024    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:15.155094    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:15.165956    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:15.165975    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:15.165980    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:15.178519    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:15.178531    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:15.213685    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:15.213698    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:15.228106    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:15.228116    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:15.243722    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:15.243734    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:15.255558    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:15.255571    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:15.270750    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:15.270760    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:15.283011    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:15.283026    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:15.320211    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:15.320221    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:15.334782    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:15.334792    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:15.373358    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:15.373369    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:15.390187    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:15.390196    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:15.404689    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:15.404699    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:15.419193    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:15.419203    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:15.431132    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:15.431142    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:15.435796    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:15.435802    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:15.453624    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:15.453633    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:17.980457    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:22.982985    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:22.983122    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:22.994622    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:22.994705    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:23.005538    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:23.005635    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:23.017738    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:23.017828    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:23.030100    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:23.030203    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:23.041945    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:23.042040    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:23.053472    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:23.053560    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:23.065166    4234 logs.go:276] 0 containers: []
	W0917 02:36:23.065178    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:23.065258    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:23.077161    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:23.077182    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:23.077188    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:23.081792    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:23.081802    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:23.124626    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:23.124657    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:23.142038    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:23.142052    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:23.158839    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:23.158855    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:23.195364    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:23.195379    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:23.210592    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:23.210606    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:23.224982    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:23.224995    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:23.236952    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:23.236964    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:23.249772    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:23.249785    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:23.261404    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:23.261416    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:23.286865    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:23.286874    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:23.327000    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:23.327009    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:23.341346    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:23.341355    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:23.357512    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:23.357521    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:23.375138    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:23.375160    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:23.390826    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:23.390837    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:25.905797    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:30.908148    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:30.908764    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:30.947903    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:30.948081    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:30.969699    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:30.969849    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:30.985660    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:30.985745    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:30.998147    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:30.998236    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:31.009000    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:31.009079    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:31.020865    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:31.020954    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:31.034137    4234 logs.go:276] 0 containers: []
	W0917 02:36:31.034156    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:31.034234    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:31.047939    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:31.047958    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:31.047964    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:31.062477    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:31.062491    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:31.074319    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:31.074328    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:31.096069    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:31.096079    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:31.119590    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:31.119599    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:31.132980    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:31.132992    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:31.137486    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:31.137495    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:31.149373    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:31.149384    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:31.168024    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:31.168034    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:31.182274    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:31.182284    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:31.197044    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:31.197053    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:31.211918    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:31.211931    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:31.223767    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:31.223783    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:31.258792    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:31.258798    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:31.294700    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:31.294715    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:31.334138    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:31.334148    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:31.348663    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:31.348678    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:33.862059    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:38.864121    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:38.864226    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:38.882219    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:38.882312    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:38.893010    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:38.893129    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:38.903057    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:38.903146    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:38.913979    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:38.914066    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:38.924485    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:38.924590    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:38.935409    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:38.935484    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:38.945636    4234 logs.go:276] 0 containers: []
	W0917 02:36:38.945647    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:38.945722    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:38.956499    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:38.956518    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:38.956524    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:38.990584    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:38.990596    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:39.004555    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:39.004565    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:39.021954    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:39.021965    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:39.033357    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:39.033368    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:39.056327    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:39.056333    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:39.090960    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:39.090967    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:39.095476    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:39.095484    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:39.109572    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:39.109582    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:39.124156    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:39.124165    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:39.134952    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:39.134961    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:39.146824    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:39.146833    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:39.163407    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:39.163417    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:39.200721    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:39.200736    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:39.214888    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:39.214899    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:39.226570    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:39.226586    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:39.240664    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:39.240674    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:41.756151    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:46.758779    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:46.758900    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:46.770536    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:46.770625    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:46.784374    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:46.784465    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:46.795292    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:46.795374    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:46.806154    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:46.806235    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:46.817258    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:46.817335    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:46.833577    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:46.833656    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:46.844723    4234 logs.go:276] 0 containers: []
	W0917 02:36:46.844735    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:46.844798    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:46.855509    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:46.855528    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:46.855534    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:46.859983    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:46.859990    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:46.895811    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:46.895823    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:46.910129    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:46.910139    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:46.922667    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:46.922679    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:46.943746    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:46.943757    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:46.956585    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:46.956593    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:46.980069    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:46.980076    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:46.991751    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:46.991766    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:47.005870    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:47.005880    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:47.020776    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:47.020786    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:47.058767    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:47.058776    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:47.069843    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:47.069855    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:47.085749    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:47.085758    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:47.101686    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:47.101696    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:47.113176    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:47.113187    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:47.152217    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:47.152233    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:49.668395    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:54.670719    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:54.671030    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:54.696972    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:54.697137    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:54.719437    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:54.719532    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:54.732320    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:54.732392    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:54.742748    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:54.742841    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:54.753064    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:54.753146    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:54.763388    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:54.763470    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:54.778968    4234 logs.go:276] 0 containers: []
	W0917 02:36:54.778984    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:54.779060    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:54.790070    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:54.790088    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:54.790094    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:54.801553    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:54.801569    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:54.825446    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:54.825454    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:54.837252    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:54.837266    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:54.854388    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:54.854400    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:54.868605    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:54.868615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:54.883367    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:54.883378    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:54.887798    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:54.887806    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:54.922372    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:54.922383    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:54.936467    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:54.936477    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:54.948453    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:54.948469    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:54.985877    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:54.985887    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:55.000842    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:55.000852    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:55.015004    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:55.015014    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:55.026654    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:55.026665    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:55.037909    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:55.037920    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:55.075112    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:55.075122    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:57.590893    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:02.591776    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:02.591902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:02.603535    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:02.603620    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:02.615007    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:02.615092    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:02.626656    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:02.626732    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:02.638302    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:02.638392    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:02.649525    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:02.649611    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:02.660869    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:02.660958    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:02.671432    4234 logs.go:276] 0 containers: []
	W0917 02:37:02.671445    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:02.671520    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:02.682424    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:02.682440    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:02.682445    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:02.695334    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:02.695345    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:02.716305    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:02.716317    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:02.730299    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:02.730310    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:02.768683    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:02.768692    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:02.787021    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:02.787042    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:02.832629    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:02.832643    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:02.857329    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:02.857345    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:02.873175    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:02.873186    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:02.878298    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:02.878309    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:02.894354    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:02.894370    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:02.907650    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:02.907666    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:02.933297    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:02.933310    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:02.973293    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:02.973306    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:02.988668    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:02.988681    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:03.029630    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:03.029643    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:03.045948    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:03.045961    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:05.560764    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:10.562907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:10.563036    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:10.574710    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:10.574808    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:10.585738    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:10.585832    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:10.597031    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:10.597109    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:10.609313    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:10.609408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:10.620787    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:10.620875    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:10.631347    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:10.631432    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:10.641700    4234 logs.go:276] 0 containers: []
	W0917 02:37:10.641714    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:10.641786    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:10.652386    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:10.652401    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:10.652406    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:10.676185    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:10.676194    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:10.712825    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:10.712834    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:10.717106    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:10.717112    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:10.751793    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:10.751805    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:10.766587    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:10.766596    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:10.791793    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:10.791809    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:10.803550    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:10.803562    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:10.814983    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:10.814994    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:10.829935    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:10.829949    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:10.847764    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:10.847777    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:10.859136    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:10.859147    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:10.873144    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:10.873157    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:10.888123    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:10.888137    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:10.926899    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:10.926913    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:10.941301    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:10.941312    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:10.955254    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:10.955264    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:13.474814    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:18.475254    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:18.475524    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:18.501330    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:18.501453    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:18.514723    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:18.514812    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:18.526482    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:18.526565    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:18.537168    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:18.537259    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:18.548607    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:18.548685    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:18.559104    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:18.559187    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:18.569481    4234 logs.go:276] 0 containers: []
	W0917 02:37:18.569496    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:18.569558    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:18.579596    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:18.579614    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:18.579620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:18.594636    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:18.594647    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:18.610172    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:18.610182    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:18.627876    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:18.627886    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:18.632147    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:18.632157    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:18.645742    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:18.645753    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:18.659029    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:18.659040    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:18.672988    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:18.672998    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:18.684500    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:18.684510    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:18.699205    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:18.699215    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:18.710798    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:18.710809    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:18.728996    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:18.729008    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:18.740899    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:18.740913    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:18.777833    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:18.777842    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:18.813235    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:18.813248    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:18.853246    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:18.853271    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:18.872571    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:18.872585    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:21.400375    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:26.402679    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:26.402829    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:26.414322    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:26.414408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:26.431854    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:26.431953    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:26.448689    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:26.448778    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:26.460489    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:26.460576    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:26.478669    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:26.478759    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:26.491205    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:26.491294    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:26.502415    4234 logs.go:276] 0 containers: []
	W0917 02:37:26.502430    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:26.502511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:26.514315    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:26.514333    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:26.514339    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:26.552675    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:26.552688    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:26.566396    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:26.566405    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:26.582118    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:26.582132    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:26.596269    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:26.596282    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:26.607746    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:26.607756    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:26.632247    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:26.632260    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:26.668582    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:26.668596    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:26.673072    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:26.673079    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:26.707993    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:26.708004    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:26.731677    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:26.731690    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:26.746639    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:26.746649    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:26.761210    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:26.761220    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:26.774134    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:26.774147    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:26.790101    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:26.790111    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:26.807533    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:26.807542    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:26.819672    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:26.819684    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:29.333637    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:34.335850    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:34.336127    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:34.358512    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:34.358630    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:34.375001    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:34.375096    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:34.387431    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:34.387522    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:34.398228    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:34.398317    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:34.408735    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:34.408817    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:34.423117    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:34.423192    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:34.434472    4234 logs.go:276] 0 containers: []
	W0917 02:37:34.434483    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:34.434548    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:34.445304    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:34.445322    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:34.445328    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:34.483252    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:34.483263    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:34.501718    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:34.501727    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:34.525311    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:34.525319    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:34.539582    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:34.539592    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:34.553857    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:34.553871    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:34.570913    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:34.570923    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:34.591165    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:34.591180    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:34.602140    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:34.602151    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:34.636482    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:34.636499    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:34.647603    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:34.647615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:34.659871    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:34.659884    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:34.671813    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:34.671828    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:34.676280    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:34.676286    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:34.713945    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:34.713955    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:34.739427    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:34.739437    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:34.754880    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:34.754891    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:37.269710    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:42.272478    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:42.272564    4234 kubeadm.go:597] duration metric: took 4m4.598357583s to restartPrimaryControlPlane
	W0917 02:37:42.272621    4234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
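
Every probe/fail pair above has the same shape: an HTTPS GET to /healthz with a short per-request client timeout, retried until an overall deadline (4m4.6s here) runs out. A minimal Go sketch of that polling pattern, assuming a 5-second per-probe timeout (inferred from the ~5 s gap between each "Checking" and "stopped" pair) and an illustrative retry pause; this is not minikube's actual code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 OK or the overall deadline passes.
    func waitForHealthz(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-probe timeout, matching the ~5 s probe windows in the log
    		Transport: &http.Transport{
    			// The apiserver certificate is self-signed here; a liveness probe can skip verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second) // illustrative back-off between probes
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, overall)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
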
	I0917 02:37:42.272650    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 02:37:43.256734    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:37:43.261654    4234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:37:43.264574    4234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:37:43.267301    4234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:37:43.267309    4234 kubeadm.go:157] found existing configuration files:
	
	I0917 02:37:43.267343    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0917 02:37:43.270680    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:37:43.270715    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:37:43.273969    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0917 02:37:43.276457    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:37:43.276484    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:37:43.279238    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0917 02:37:43.282313    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:37:43.282340    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:37:43.285494    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0917 02:37:43.288091    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:37:43.288113    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
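
The grep/rm pairs above are a stale-config sweep: each kubeadm-managed conf file is checked for the expected control-plane endpoint and deleted when the check fails, so the subsequent kubeadm init regenerates it. A hypothetical Go sketch of that sweep (the file paths and endpoint are copied from the log; the function name is invented):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // cleanStaleConfigs removes any kubeadm conf file that does not reference
    // the expected control-plane endpoint, matching the grep/rm pairs above.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			exec.Command("sudo", "rm", "-f", f).Run() // best-effort cleanup, error ignored
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:50268")
    }
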
	I0917 02:37:43.291118    4234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 02:37:43.311263    4234 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 02:37:43.311303    4234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 02:37:43.365653    4234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 02:37:43.365712    4234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 02:37:43.365799    4234 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 02:37:43.416175    4234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:37:43.420329    4234 out.go:235]   - Generating certificates and keys ...
	I0917 02:37:43.420369    4234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 02:37:43.420404    4234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 02:37:43.420440    4234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:37:43.420475    4234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 02:37:43.420520    4234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:37:43.420604    4234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 02:37:43.420679    4234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 02:37:43.420757    4234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:37:43.420799    4234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:37:43.420838    4234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:37:43.420861    4234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 02:37:43.420898    4234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:37:43.475379    4234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:37:43.508482    4234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:37:43.548221    4234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:37:43.607939    4234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:37:43.647552    4234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:37:43.647970    4234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:37:43.648013    4234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 02:37:43.732754    4234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:37:43.739879    4234 out.go:235]   - Booting up control plane ...
	I0917 02:37:43.739930    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:37:43.739979    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:37:43.740017    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:37:43.740060    4234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:37:43.740144    4234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 02:37:48.235630    4234 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503029 seconds
	I0917 02:37:48.235752    4234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 02:37:48.240355    4234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 02:37:48.759541    4234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 02:37:48.759840    4234 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-202000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 02:37:49.264006    4234 kubeadm.go:310] [bootstrap-token] Using token: 7pag7d.3y4wox6ghhmt7q13
	I0917 02:37:49.270412    4234 out.go:235]   - Configuring RBAC rules ...
	I0917 02:37:49.270475    4234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 02:37:49.270533    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 02:37:49.273899    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 02:37:49.274871    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 02:37:49.275786    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 02:37:49.276646    4234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 02:37:49.279989    4234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 02:37:49.441465    4234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 02:37:49.670371    4234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 02:37:49.670780    4234 kubeadm.go:310] 
	I0917 02:37:49.670810    4234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 02:37:49.670813    4234 kubeadm.go:310] 
	I0917 02:37:49.670851    4234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 02:37:49.670856    4234 kubeadm.go:310] 
	I0917 02:37:49.670870    4234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 02:37:49.670903    4234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 02:37:49.670927    4234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 02:37:49.670933    4234 kubeadm.go:310] 
	I0917 02:37:49.670997    4234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 02:37:49.671015    4234 kubeadm.go:310] 
	I0917 02:37:49.671043    4234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 02:37:49.671048    4234 kubeadm.go:310] 
	I0917 02:37:49.671075    4234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 02:37:49.671116    4234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 02:37:49.671157    4234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 02:37:49.671162    4234 kubeadm.go:310] 
	I0917 02:37:49.671206    4234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 02:37:49.671247    4234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 02:37:49.671250    4234 kubeadm.go:310] 
	I0917 02:37:49.671297    4234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7pag7d.3y4wox6ghhmt7q13 \
	I0917 02:37:49.671358    4234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d \
	I0917 02:37:49.671376    4234 kubeadm.go:310] 	--control-plane 
	I0917 02:37:49.671379    4234 kubeadm.go:310] 
	I0917 02:37:49.671425    4234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 02:37:49.671429    4234 kubeadm.go:310] 
	I0917 02:37:49.671472    4234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7pag7d.3y4wox6ghhmt7q13 \
	I0917 02:37:49.671532    4234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d 
	I0917 02:37:49.671605    4234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 02:37:49.671613    4234 cni.go:84] Creating CNI manager for ""
	I0917 02:37:49.671621    4234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:37:49.679335    4234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 02:37:49.682400    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 02:37:49.685731    4234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
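
The 496-byte file written to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI chain announced at "Configuring bridge CNI" above. The log does not show the file's contents; the following is an illustrative bridge conflist of the same general shape (every value below is an assumption, not the bytes minikube wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
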
	I0917 02:37:49.690456    4234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:37:49.690501    4234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 02:37:49.690527    4234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-202000 minikube.k8s.io/updated_at=2024_09_17T02_37_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=running-upgrade-202000 minikube.k8s.io/primary=true
	I0917 02:37:49.728069    4234 kubeadm.go:1113] duration metric: took 37.606292ms to wait for elevateKubeSystemPrivileges
	I0917 02:37:49.728078    4234 ops.go:34] apiserver oom_adj: -16
	I0917 02:37:49.729831    4234 kubeadm.go:394] duration metric: took 4m12.0692795s to StartCluster
	I0917 02:37:49.729846    4234 settings.go:142] acquiring lock: {Name:mk2d861f3b7e502753ec34b4d96136a66d57e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:49.729938    4234 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:37:49.730314    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:49.730541    4234 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:37:49.730585    4234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:37:49.730616    4234 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-202000"
	I0917 02:37:49.730624    4234 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-202000"
	I0917 02:37:49.730625    4234 config.go:182] Loaded profile config "running-upgrade-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W0917 02:37:49.730628    4234 addons.go:243] addon storage-provisioner should already be in state true
	I0917 02:37:49.730638    4234 host.go:66] Checking if "running-upgrade-202000" exists ...
	I0917 02:37:49.730675    4234 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-202000"
	I0917 02:37:49.730684    4234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-202000"
	I0917 02:37:49.730940    4234 retry.go:31] will retry after 631.331049ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/monitor: connect: connection refused
	I0917 02:37:49.731661    4234 kapi.go:59] client config for running-upgrade-202000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106385800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:37:49.731782    4234 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-202000"
	W0917 02:37:49.731787    4234 addons.go:243] addon default-storageclass should already be in state true
	I0917 02:37:49.731793    4234 host.go:66] Checking if "running-upgrade-202000" exists ...
	I0917 02:37:49.732320    4234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 02:37:49.732326    4234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 02:37:49.732332    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:37:49.734300    4234 out.go:177] * Verifying Kubernetes components...
	I0917 02:37:49.742167    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:49.833069    4234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:37:49.837864    4234 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:37:49.837918    4234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:49.840335    4234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 02:37:49.843083    4234 api_server.go:72] duration metric: took 112.53125ms to wait for apiserver process to appear ...
	I0917 02:37:49.843094    4234 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:37:49.843101    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:50.145995    4234 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:37:50.146006    4234 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:37:50.368094    4234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:50.372199    4234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:37:50.372207    4234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 02:37:50.372218    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:37:50.402086    4234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:37:54.844907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:54.844969    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:59.843536    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:59.843568    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:04.842517    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:04.842566    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:09.842073    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:09.842091    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:14.842011    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:14.842092    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:19.842820    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:19.842884    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 02:38:20.142031    4234 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 02:38:20.146864    4234 out.go:177] * Enabled addons: storage-provisioner
	I0917 02:38:20.153786    4234 addons.go:510] duration metric: took 30.429468292s for enable addons: enabled=[storage-provisioner]
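
The 'default-storageclass' failure above corresponds to listing StorageClasses through the (unreachable) apiserver and then marking the chosen class as default via the is-default-class annotation. A hedged client-go sketch of that step, not minikube's code — the kubeconfig path mirrors the log, and "standard" is minikube's usual class name:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// This List call is the step failing with "dial tcp ... i/o timeout" in the log.
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(fmt.Errorf("Error listing StorageClasses: %w", err))
    	}
    	patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
    	for _, sc := range scs.Items {
    		if sc.Name == "standard" {
    			if _, err := cs.StorageV1().StorageClasses().Patch(context.TODO(), sc.Name,
    				types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
    				panic(err)
    			}
    		}
    	}
    }
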
	I0917 02:38:24.843903    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:24.843999    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:29.845671    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:29.845722    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:34.847451    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:34.847536    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:39.849944    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:39.849969    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:44.851932    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:44.852048    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:49.854456    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:49.854563    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:49.879590    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:38:49.879688    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:49.900125    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:38:49.900214    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:49.911202    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:38:49.911294    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:49.921866    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:38:49.921947    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:49.934701    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:38:49.934790    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:49.945126    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:38:49.945197    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:49.955919    4234 logs.go:276] 0 containers: []
	W0917 02:38:49.955933    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:49.956002    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:49.967975    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:38:49.967988    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:49.967993    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:50.006883    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:38:50.006894    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:38:50.025643    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:38:50.025652    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:38:50.038642    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:38:50.038653    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:38:50.060410    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:38:50.060421    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:50.073452    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:50.073465    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:50.107104    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:50.107116    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:50.111725    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:38:50.111734    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:38:50.123207    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:38:50.123219    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:38:50.137814    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:38:50.137823    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:38:50.149511    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:50.149526    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:50.174375    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:38:50.174385    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:38:50.188985    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:38:50.188996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:38:52.712181    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:57.714364    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:57.714462    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:57.726383    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:38:57.726478    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:57.737750    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:38:57.737836    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:57.750630    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:38:57.750722    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:57.762560    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:38:57.762645    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:57.777148    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:38:57.777238    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:57.795473    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:38:57.795554    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:57.810182    4234 logs.go:276] 0 containers: []
	W0917 02:38:57.810194    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:57.810273    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:57.821229    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:38:57.821247    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:57.821252    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:57.825821    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:57.825830    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:57.859303    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:57.859315    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:57.884656    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:38:57.884667    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:38:57.896790    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:38:57.896806    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:38:57.914428    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:57.914438    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:57.950741    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:38:57.950758    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:38:57.967918    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:38:57.967933    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:38:57.981878    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:38:57.981893    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:38:57.993228    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:38:57.993244    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:38:58.005109    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:38:58.005125    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:38:58.019507    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:38:58.019521    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:38:58.031846    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:38:58.031858    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:00.544879    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:05.547077    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:05.547193    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:05.559818    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:05.559911    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:05.571625    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:05.571717    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:05.583725    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:05.583816    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:05.599657    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:05.599751    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:05.611416    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:05.611509    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:05.623590    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:05.623666    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:05.635805    4234 logs.go:276] 0 containers: []
	W0917 02:39:05.635815    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:05.635893    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:05.647388    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:05.647403    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:05.647409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:05.660270    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:05.660283    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:05.685613    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:05.685623    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:05.697821    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:05.697832    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:05.733746    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:05.733764    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:05.738260    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:05.738267    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:05.772570    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:05.772584    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:05.784895    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:05.784907    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:05.796765    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:05.796780    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:05.811700    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:05.811710    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:05.826060    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:05.826074    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:05.837611    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:05.837624    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:05.855140    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:05.855149    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:08.374515    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:13.376754    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:13.376860    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:13.388792    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:13.388885    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:13.400119    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:13.400200    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:13.411320    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:13.411408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:13.423202    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:13.423291    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:13.434925    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:13.435017    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:13.446445    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:13.446533    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:13.457908    4234 logs.go:276] 0 containers: []
	W0917 02:39:13.457921    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:13.458003    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:13.469375    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:13.469392    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:13.469398    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:13.484485    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:13.484496    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:13.497628    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:13.497639    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:13.510290    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:13.510302    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:13.531700    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:13.531717    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:13.568369    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:13.568389    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:13.573434    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:13.573448    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:13.615085    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:13.615095    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:13.631171    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:13.631181    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:13.656047    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:13.656060    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:13.668401    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:13.668414    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:13.681398    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:13.681409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:13.699894    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:13.699908    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:16.213358    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:21.215510    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:21.215623    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:21.227149    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:21.227241    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:21.238483    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:21.238580    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:21.250669    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:21.250719    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:21.261695    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:21.261780    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:21.273998    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:21.274093    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:21.291231    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:21.291320    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:21.302430    4234 logs.go:276] 0 containers: []
	W0917 02:39:21.302441    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:21.302518    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:21.313696    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:21.313713    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:21.313719    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:21.326534    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:21.326547    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:21.343368    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:21.343378    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:21.355708    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:21.355720    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:21.372300    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:21.372312    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:21.409327    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:21.409337    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:21.424957    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:21.424971    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:21.440466    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:21.440473    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:21.453291    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:21.453301    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:21.479750    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:21.479771    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:21.492743    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:21.492758    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:21.498433    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:21.498442    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:21.538223    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:21.538234    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:24.058489    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:29.060844    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:29.061045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:29.072708    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:29.072805    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:29.083529    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:29.083617    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:29.094224    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:29.094306    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:29.105075    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:29.105278    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:29.115877    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:29.115957    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:29.127802    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:29.127902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:29.142555    4234 logs.go:276] 0 containers: []
	W0917 02:39:29.142566    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:29.142639    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:29.158001    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:29.158011    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:29.158016    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:29.194453    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:29.194472    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:29.199488    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:29.199499    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:29.237175    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:29.237188    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:29.251436    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:29.251451    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:29.266381    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:29.266390    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:29.281141    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:29.281158    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:29.293779    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:29.293796    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:29.309100    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:29.309115    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:29.321376    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:29.321389    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:29.344237    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:29.344252    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:29.357385    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:29.357396    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:29.383844    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:29.383854    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:31.898895    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:36.901229    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:36.901498    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:36.920306    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:36.920417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:36.936166    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:36.936261    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:36.947259    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:36.947341    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:36.958104    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:36.958179    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:36.969249    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:36.969337    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:36.980147    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:36.980237    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:36.990358    4234 logs.go:276] 0 containers: []
	W0917 02:39:36.990370    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:36.990445    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:37.001039    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:37.001054    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:37.001060    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:37.035840    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:37.035853    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:37.041119    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:37.041127    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:37.056662    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:37.056675    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:37.072209    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:37.072226    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:37.085182    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:37.085194    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:37.123969    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:37.123981    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:37.140450    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:37.140462    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:37.155883    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:37.155897    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:37.168721    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:37.168734    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:37.186621    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:37.186632    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:37.199223    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:37.199239    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:37.225695    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:37.225706    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:39.739980    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:44.742231    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:44.742410    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:44.754546    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:44.754637    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:44.765093    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:44.765177    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:44.775976    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:44.776065    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:44.786524    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:44.786610    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:44.796965    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:44.797041    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:44.808466    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:44.808539    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:44.818583    4234 logs.go:276] 0 containers: []
	W0917 02:39:44.818597    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:44.818672    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:44.828512    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:44.828527    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:44.828533    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:44.852442    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:44.852450    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:44.861082    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:44.861089    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:44.877665    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:44.877678    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:44.888865    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:44.888876    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:44.902564    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:44.902578    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:44.914252    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:44.914266    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:44.934008    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:44.934020    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:44.945514    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:44.945524    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:44.964317    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:44.964391    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:45.003161    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:45.003185    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:45.044785    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:45.044798    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:45.060381    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:45.060398    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:47.573663    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:52.575888    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:52.576028    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:52.588105    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:52.588203    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:52.599241    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:52.599332    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:52.614674    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:52.614760    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:52.625313    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:52.625401    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:52.635996    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:52.636079    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:52.646441    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:52.646528    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:52.658034    4234 logs.go:276] 0 containers: []
	W0917 02:39:52.658047    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:52.658122    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:52.668716    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:52.668731    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:52.668737    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:52.680869    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:52.680881    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:52.698614    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:52.698628    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:52.709881    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:52.709892    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:52.721291    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:52.721304    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:52.725789    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:52.725796    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:52.739628    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:52.739642    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:52.754801    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:52.754812    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:52.765850    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:52.765863    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:52.780076    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:52.780092    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:52.792065    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:52.792077    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:52.817263    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:52.817271    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:52.853600    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:52.853611    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:55.435607    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:00.436940    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:00.437184    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:00.459816    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:00.459944    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:00.476177    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:00.476275    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:00.490318    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:40:00.490406    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:00.501365    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:00.501441    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:00.513230    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:00.513313    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:00.526248    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:00.526327    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:00.541069    4234 logs.go:276] 0 containers: []
	W0917 02:40:00.541080    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:00.541144    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:00.552071    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:00.552084    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:00.552089    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:00.566329    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:00.566342    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:00.578521    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:00.578535    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:00.592929    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:00.592938    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:00.620566    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:00.620583    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:00.658106    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:00.658124    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:00.734568    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:00.734583    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:00.761718    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:00.761731    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:00.784282    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:00.784291    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:00.809508    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:00.809519    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:00.823593    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:00.823603    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:00.836586    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:00.836601    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:00.841794    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:00.841806    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:03.360571    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:08.362907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:08.363383    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:08.395685    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:08.395821    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:08.418049    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:08.418142    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:08.430714    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:08.430800    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:08.446145    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:08.446224    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:08.456952    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:08.457030    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:08.468233    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:08.468308    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:08.478657    4234 logs.go:276] 0 containers: []
	W0917 02:40:08.478671    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:08.478737    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:08.489444    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:08.489460    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:08.489466    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:08.500805    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:08.500815    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:08.515823    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:08.515832    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:08.541882    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:08.541890    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:08.556556    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:08.556567    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:08.594463    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:08.594474    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:08.609459    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:08.609472    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:08.621108    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:08.621119    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:08.655863    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:08.655871    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:08.660758    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:08.660766    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:08.675712    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:08.675722    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:08.689880    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:08.689894    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:08.702759    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:08.702772    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:08.716262    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:08.716274    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:08.735780    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:08.735789    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:11.251002    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:16.253445    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:16.253755    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:16.279498    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:16.279626    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:16.296191    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:16.296298    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:16.309521    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:16.309618    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:16.320661    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:16.320733    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:16.331353    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:16.331440    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:16.342631    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:16.342712    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:16.352464    4234 logs.go:276] 0 containers: []
	W0917 02:40:16.352477    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:16.352540    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:16.363102    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:16.363119    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:16.363124    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:16.375263    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:16.375274    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:16.390687    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:16.390698    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:16.402610    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:16.402620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:16.414559    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:16.414576    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:16.426314    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:16.426328    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:16.443512    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:16.443525    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:16.469430    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:16.469444    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:16.483602    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:16.483615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:16.502019    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:16.502028    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:16.538340    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:16.538359    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:16.543315    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:16.543322    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:16.579103    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:16.579116    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:16.593146    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:16.593156    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:16.605411    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:16.605422    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:19.119415    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:24.121774    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:24.122045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:24.145572    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:24.145709    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:24.161709    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:24.161802    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:24.173993    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:24.174083    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:24.185004    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:24.185082    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:24.195973    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:24.196058    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:24.211170    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:24.211243    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:24.225622    4234 logs.go:276] 0 containers: []
	W0917 02:40:24.225635    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:24.225711    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:24.236510    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:24.236527    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:24.236534    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:24.248172    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:24.248187    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:24.282455    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:24.282465    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:24.294062    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:24.294075    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:24.305838    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:24.305848    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:24.310156    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:24.310164    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:24.324548    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:24.324558    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:24.338373    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:24.338384    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:24.350793    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:24.350803    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:24.368466    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:24.368477    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:24.392820    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:24.392828    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:24.407645    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:24.407656    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:24.419367    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:24.419377    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:24.455046    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:24.455057    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:24.469213    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:24.469227    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:26.983042    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:31.985337    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:31.985511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:31.996126    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:31.996217    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:32.006637    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:32.006726    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:32.017110    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:32.017196    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:32.028298    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:32.028386    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:32.039777    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:32.039858    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:32.050662    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:32.050752    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:32.064007    4234 logs.go:276] 0 containers: []
	W0917 02:40:32.064019    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:32.064084    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:32.083897    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:32.083915    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:32.083920    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:32.109248    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:32.109256    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:32.145535    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:32.145546    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:32.159233    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:32.159244    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:32.170996    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:32.171005    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:32.182507    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:32.182516    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:32.196471    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:32.196481    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:32.207718    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:32.207727    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:32.222250    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:32.222260    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:32.233830    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:32.233839    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:32.238325    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:32.238333    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:32.268461    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:32.268480    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:32.286701    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:32.286712    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:32.301857    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:32.301866    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:32.337027    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:32.337043    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:34.851581    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:39.853905    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:39.854153    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:39.877789    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:39.877902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:39.892270    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:39.892366    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:39.905246    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:39.905331    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:39.922054    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:39.922136    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:39.935426    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:39.935511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:39.946353    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:39.946444    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:39.961233    4234 logs.go:276] 0 containers: []
	W0917 02:40:39.961244    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:39.961317    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:39.972079    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:39.972098    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:39.972107    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:40.012880    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:40.012891    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:40.037017    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:40.037026    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:40.048316    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:40.048328    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:40.082540    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:40.082550    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:40.101398    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:40.101408    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:40.113241    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:40.113252    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:40.125258    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:40.125268    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:40.129848    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:40.129857    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:40.142251    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:40.142261    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:40.162970    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:40.162983    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:40.180084    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:40.180094    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:40.201854    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:40.201868    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:40.213067    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:40.213077    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:40.225123    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:40.225137    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:42.738980    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:47.741419    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:47.741656    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:47.764112    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:47.764220    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:47.778759    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:47.778864    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:47.791626    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:47.791709    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:47.802745    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:47.802831    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:47.814766    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:47.814851    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:47.838956    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:47.839045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:47.850568    4234 logs.go:276] 0 containers: []
	W0917 02:40:47.850579    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:47.850643    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:47.863345    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:47.863363    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:47.863368    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:47.875315    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:47.875324    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:47.887080    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:47.887091    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:47.923086    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:47.923099    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:47.937428    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:47.937439    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:47.949365    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:47.949378    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:47.963648    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:47.963662    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:47.968651    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:47.968658    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:47.980799    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:47.980811    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:48.005027    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:48.005043    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:48.017790    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:48.017807    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:48.054897    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:48.054906    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:48.073934    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:48.073945    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:48.085406    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:48.085421    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:48.109617    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:48.109626    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:50.631785    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:55.634119    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:55.634417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:55.656814    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:55.656954    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:55.672413    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:55.672497    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:55.684937    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:55.685026    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:55.696213    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:55.696304    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:55.714710    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:55.714788    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:55.726176    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:55.726250    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:55.736961    4234 logs.go:276] 0 containers: []
	W0917 02:40:55.736972    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:55.737041    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:55.747809    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:55.747833    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:55.747839    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:55.752800    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:55.752809    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:55.789320    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:55.789332    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:55.801188    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:55.801200    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:55.816859    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:55.816869    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:55.829201    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:55.829214    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:55.845165    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:55.845176    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:55.863310    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:55.863325    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:55.874978    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:55.874989    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:55.910920    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:55.910928    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:55.924831    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:55.924840    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:55.937535    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:55.937547    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:55.962710    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:55.962718    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:55.974350    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:55.974359    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:55.986246    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:55.986256    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:58.512415    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:03.514723    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:03.514935    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:03.532155    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:03.532241    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:03.543730    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:03.543825    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:03.554455    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:03.554530    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:03.564849    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:03.564936    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:03.575238    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:03.575324    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:03.585564    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:03.585651    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:03.595144    4234 logs.go:276] 0 containers: []
	W0917 02:41:03.595155    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:03.595220    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:03.605662    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:03.605680    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:03.605685    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:03.641522    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:03.641531    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:03.665985    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:03.665996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:03.677579    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:03.677591    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:03.689674    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:03.689689    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:03.726063    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:03.726077    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:03.741001    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:03.741014    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:03.753015    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:03.753026    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:03.766507    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:03.766517    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:03.791026    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:03.791041    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:03.808883    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:03.808897    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:03.821059    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:03.821075    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:03.825533    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:03.825539    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:03.839426    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:03.839439    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:03.856008    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:03.856021    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:06.370305    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:11.372914    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:11.373181    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:11.395443    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:11.395578    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:11.414366    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:11.414460    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:11.426550    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:11.426645    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:11.436892    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:11.436980    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:11.447620    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:11.447710    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:11.458699    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:11.458767    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:11.469171    4234 logs.go:276] 0 containers: []
	W0917 02:41:11.469186    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:11.469254    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:11.480192    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:11.480209    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:11.480215    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:11.492678    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:11.492688    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:11.528995    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:11.529013    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:11.541540    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:11.541555    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:11.556907    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:11.556916    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:11.568767    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:11.568778    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:11.586016    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:11.586026    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:11.590685    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:11.590691    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:11.602199    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:11.602209    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:11.613869    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:11.613879    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:11.625438    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:11.625452    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:11.643810    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:11.643820    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:11.657772    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:11.657782    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:11.681272    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:11.681280    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:11.717971    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:11.717981    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:14.231940    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:19.234263    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:19.234422    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:19.246390    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:19.246474    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:19.256841    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:19.256933    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:19.269450    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:19.269545    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:19.280458    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:19.280535    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:19.290763    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:19.290846    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:19.307659    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:19.307746    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:19.323374    4234 logs.go:276] 0 containers: []
	W0917 02:41:19.323386    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:19.323463    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:19.335042    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:19.335060    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:19.335066    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:19.354854    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:19.354874    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:19.367405    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:19.367418    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:19.404784    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:19.404797    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:19.441717    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:19.441730    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:19.456601    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:19.456620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:19.470394    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:19.470409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:19.486748    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:19.486765    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:19.491610    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:19.491621    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:19.506819    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:19.506838    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:19.533398    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:19.533420    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:19.546218    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:19.546230    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:19.559944    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:19.559958    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:19.573494    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:19.573509    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:19.586598    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:19.586615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:22.101800    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:27.104161    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:27.104343    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:27.116694    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:27.116786    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:27.128061    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:27.128151    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:27.138730    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:27.138820    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:27.149848    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:27.149929    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:27.160478    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:27.160561    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:27.171614    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:27.171693    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:27.182714    4234 logs.go:276] 0 containers: []
	W0917 02:41:27.182725    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:27.182800    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:27.193376    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:27.193396    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:27.193401    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:27.209921    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:27.209932    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:27.224594    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:27.224604    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:27.235810    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:27.235821    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:27.248702    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:27.248718    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:27.266423    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:27.266432    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:27.270865    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:27.270874    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:27.285251    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:27.285260    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:27.299612    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:27.299625    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:27.313029    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:27.313037    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:27.338192    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:27.338200    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:27.373498    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:27.373507    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:27.409881    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:27.409892    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:27.421815    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:27.421825    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:27.436979    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:27.436992    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:29.951096    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:34.953306    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:34.953452    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:34.964528    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:34.964600    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:34.975094    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:34.975181    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:34.986566    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:34.986650    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:34.997153    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:34.997232    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:35.008109    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:35.008200    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:35.019176    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:35.019246    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:35.029388    4234 logs.go:276] 0 containers: []
	W0917 02:41:35.029400    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:35.029472    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:35.044365    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:35.044383    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:35.044389    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:35.049523    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:35.049531    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:35.061222    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:35.061231    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:35.073198    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:35.073207    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:35.088711    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:35.088729    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:35.101414    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:35.101425    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:35.120064    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:35.120073    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:35.132242    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:35.132256    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:35.143938    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:35.143950    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:35.155784    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:35.155794    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:35.191556    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:35.191572    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:35.228757    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:35.228769    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:35.245177    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:35.245189    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:35.263593    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:35.263604    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:35.274872    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:35.274884    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:37.800616    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:42.802085    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:42.802286    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:42.823495    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:42.823610    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:42.838867    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:42.838958    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:42.850913    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:42.851018    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:42.861619    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:42.861690    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:42.872323    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:42.872400    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:42.883763    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:42.883843    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:42.894196    4234 logs.go:276] 0 containers: []
	W0917 02:41:42.894206    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:42.894266    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:42.904456    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:42.904471    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:42.904478    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:42.916812    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:42.916821    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:42.941939    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:42.941953    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:42.954633    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:42.954648    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:42.966462    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:42.966471    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:42.978985    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:42.978996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:42.999287    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:42.999298    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:43.004373    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:43.004380    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:43.040867    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:43.040877    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:43.059267    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:43.059277    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:43.094412    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:43.094422    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:43.109765    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:43.109775    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:43.123735    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:43.123745    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:43.136533    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:43.136544    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:43.148990    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:43.149002    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:45.672122    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:50.672317    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:50.676038    4234 out.go:201] 
	W0917 02:41:50.678782    4234 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 02:41:50.678791    4234 out.go:270] * 
	W0917 02:41:50.679524    4234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:41:50.690835    4234 out.go:201] 

** /stderr **
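Editor's note: the repeated "Checking apiserver healthz ... context deadline exceeded" lines in the stderr capture above are minikube's start-up wait loop giving up on the guest apiserver after its 6m0s window. As a rough illustration only (this is not minikube's api_server.go; the probe URL, timeouts, and TLS handling below are assumptions), a wait loop of that shape looks like:

// Minimal sketch of an apiserver healthz wait loop: poll the endpoint with a
// short per-request timeout until an overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall, perRequest time.Duration) error {
	client := &http.Client{
		// The per-request timeout is what produces the
		// "Client.Timeout exceeded while awaiting headers" wording above.
		Timeout: perRequest,
		Transport: &http.Transport{
			// Assumption: the guest apiserver cert is not trusted by the host,
			// so the health probe skips verification (never do this for real traffic).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // back off between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second); err != nil {
		fmt.Println(err) // mirrors the GUEST_START exit path seen above
	}
}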
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-202000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-17 02:41:50.792261 -0700 PDT m=+3866.087929626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-202000 -n running-upgrade-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-202000 -n running-upgrade-202000: exit status 2 (15.5933665s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-202000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-446000          | force-systemd-flag-446000 | jenkins | v1.34.0 | 17 Sep 24 02:31 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-154000              | force-systemd-env-154000  | jenkins | v1.34.0 | 17 Sep 24 02:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-154000           | force-systemd-env-154000  | jenkins | v1.34.0 | 17 Sep 24 02:31 PDT | 17 Sep 24 02:31 PDT |
	| start   | -p docker-flags-296000                | docker-flags-296000       | jenkins | v1.34.0 | 17 Sep 24 02:31 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-446000             | force-systemd-flag-446000 | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-446000          | force-systemd-flag-446000 | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT | 17 Sep 24 02:32 PDT |
	| start   | -p cert-expiration-340000             | cert-expiration-340000    | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-296000 ssh               | docker-flags-296000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-296000 ssh               | docker-flags-296000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-296000                | docker-flags-296000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT | 17 Sep 24 02:32 PDT |
	| start   | -p cert-options-453000                | cert-options-453000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-453000 ssh               | cert-options-453000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-453000 -- sudo        | cert-options-453000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-453000                | cert-options-453000       | jenkins | v1.34.0 | 17 Sep 24 02:32 PDT | 17 Sep 24 02:32 PDT |
	| start   | -p running-upgrade-202000             | minikube                  | jenkins | v1.26.0 | 17 Sep 24 02:32 PDT | 17 Sep 24 02:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-202000             | running-upgrade-202000    | jenkins | v1.34.0 | 17 Sep 24 02:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-340000             | cert-expiration-340000    | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-340000             | cert-expiration-340000    | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT | 17 Sep 24 02:35 PDT |
	| start   | -p kubernetes-upgrade-685000          | kubernetes-upgrade-685000 | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685000          | kubernetes-upgrade-685000 | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT | 17 Sep 24 02:35 PDT |
	| start   | -p kubernetes-upgrade-685000          | kubernetes-upgrade-685000 | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685000          | kubernetes-upgrade-685000 | jenkins | v1.34.0 | 17 Sep 24 02:35 PDT | 17 Sep 24 02:35 PDT |
	| start   | -p stopped-upgrade-288000             | minikube                  | jenkins | v1.26.0 | 17 Sep 24 02:35 PDT | 17 Sep 24 02:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-288000 stop           | minikube                  | jenkins | v1.26.0 | 17 Sep 24 02:36 PDT | 17 Sep 24 02:36 PDT |
	| start   | -p stopped-upgrade-288000             | stopped-upgrade-288000    | jenkins | v1.34.0 | 17 Sep 24 02:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 02:36:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 02:36:39.285186    4370 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:36:39.285339    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:36:39.285342    4370 out.go:358] Setting ErrFile to fd 2...
	I0917 02:36:39.285345    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:36:39.285464    4370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:36:39.286509    4370 out.go:352] Setting JSON to false
	I0917 02:36:39.303429    4370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3969,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:36:39.303492    4370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:36:39.308687    4370 out.go:177] * [stopped-upgrade-288000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:36:39.316758    4370 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:36:39.316815    4370 notify.go:220] Checking for updates...
	I0917 02:36:39.325609    4370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:36:39.328603    4370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:36:39.331618    4370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:36:39.334678    4370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:36:39.335931    4370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:36:39.338928    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:36:39.342649    4370 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 02:36:39.345647    4370 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:36:39.349587    4370 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:36:39.356646    4370 start.go:297] selected driver: qemu2
	I0917 02:36:39.356651    4370 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:36:39.356694    4370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:36:39.359061    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:36:39.359089    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:36:39.359109    4370 start.go:340] cluster config:
	{Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:36:39.359156    4370 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:36:39.367659    4370 out.go:177] * Starting "stopped-upgrade-288000" primary control-plane node in "stopped-upgrade-288000" cluster
	I0917 02:36:39.371643    4370 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:36:39.371659    4370 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 02:36:39.371666    4370 cache.go:56] Caching tarball of preloaded images
	I0917 02:36:39.371729    4370 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:36:39.371736    4370 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 02:36:39.371792    4370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/config.json ...
	I0917 02:36:39.372239    4370 start.go:360] acquireMachinesLock for stopped-upgrade-288000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:36:39.372272    4370 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "stopped-upgrade-288000"
	I0917 02:36:39.372280    4370 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:36:39.372286    4370 fix.go:54] fixHost starting: 
	I0917 02:36:39.372389    4370 fix.go:112] recreateIfNeeded on stopped-upgrade-288000: state=Stopped err=<nil>
	W0917 02:36:39.372398    4370 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:36:39.380567    4370 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-288000" ...
	I0917 02:36:38.864121    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:38.864226    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:38.882219    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:38.882312    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:38.893010    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:38.893129    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:38.903057    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:38.903146    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:38.913979    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:38.914066    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:38.924485    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:38.924590    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:38.935409    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:38.935484    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:38.945636    4234 logs.go:276] 0 containers: []
	W0917 02:36:38.945647    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:38.945722    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:38.956499    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:38.956518    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:38.956524    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:38.990584    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:38.990596    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:39.004555    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:39.004565    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:39.021954    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:39.021965    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:39.033357    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:39.033368    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:39.056327    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:39.056333    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:39.090960    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:39.090967    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:39.095476    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:39.095484    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:39.109572    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:39.109582    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:39.124156    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:39.124165    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:39.134952    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:39.134961    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:39.146824    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:39.146833    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:39.163407    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:39.163417    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:39.200721    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:39.200736    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:39.214888    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:39.214899    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:39.226570    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:39.226586    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:39.240664    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:39.240674    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:39.384644    4370 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:36:39.384718    4370 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50472-:22,hostfwd=tcp::50473-:2376,hostname=stopped-upgrade-288000 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/disk.qcow2
	I0917 02:36:39.430165    4370 main.go:141] libmachine: STDOUT: 
	I0917 02:36:39.430187    4370 main.go:141] libmachine: STDERR: 
	I0917 02:36:39.430195    4370 main.go:141] libmachine: Waiting for VM to start (ssh -p 50472 docker@127.0.0.1)...
	I0917 02:36:41.756151    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:46.758779    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:46.758900    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:46.770536    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:46.770625    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:46.784374    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:46.784465    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:46.795292    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:46.795374    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:46.806154    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:46.806235    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:46.817258    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:46.817335    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:46.833577    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:46.833656    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:46.844723    4234 logs.go:276] 0 containers: []
	W0917 02:36:46.844735    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:46.844798    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:46.855509    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:46.855528    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:46.855534    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:46.859983    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:46.859990    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:46.895811    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:46.895823    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:46.910129    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:46.910139    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:46.922667    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:46.922679    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:46.943746    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:46.943757    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:46.956585    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:46.956593    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:46.980069    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:46.980076    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:46.991751    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:46.991766    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:47.005870    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:47.005880    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:47.020776    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:47.020786    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:47.058767    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:47.058776    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:47.069843    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:47.069855    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:47.085749    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:47.085758    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:47.101686    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:47.101696    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:47.113176    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:47.113187    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:47.152217    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:47.152233    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:49.668395    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:54.670719    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:36:54.671030    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:36:54.696972    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:36:54.697137    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:36:54.719437    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:36:54.719532    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:36:54.732320    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:36:54.732392    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:36:54.742748    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:36:54.742841    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:36:54.753064    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:36:54.753146    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:36:54.763388    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:36:54.763470    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:36:54.778968    4234 logs.go:276] 0 containers: []
	W0917 02:36:54.778984    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:36:54.779060    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:36:54.790070    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:36:54.790088    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:36:54.790094    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:36:54.801553    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:36:54.801569    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:36:54.825446    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:36:54.825454    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:36:54.837252    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:36:54.837266    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:36:54.854388    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:36:54.854400    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:36:54.868605    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:36:54.868615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:36:54.883367    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:36:54.883378    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:36:54.887798    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:36:54.887806    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:36:54.922372    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:36:54.922383    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:36:54.936467    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:36:54.936477    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:36:54.948453    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:36:54.948469    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:36:54.985877    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:36:54.985887    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:36:55.000842    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:36:55.000852    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:36:55.015004    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:36:55.015014    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:36:55.026654    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:36:55.026665    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:36:55.037909    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:36:55.037920    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:36:55.075112    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:36:55.075122    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:36:57.590893    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:36:59.808580    4370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/config.json ...
	I0917 02:36:59.809578    4370 machine.go:93] provisionDockerMachine start ...
	I0917 02:36:59.809757    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.810192    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.810209    4370 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:36:59.884227    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:36:59.884252    4370 buildroot.go:166] provisioning hostname "stopped-upgrade-288000"
	I0917 02:36:59.884380    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.884609    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.884621    4370 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-288000 && echo "stopped-upgrade-288000" | sudo tee /etc/hostname
	I0917 02:36:59.956718    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-288000
	
	I0917 02:36:59.956776    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.956911    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.956924    4370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-288000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-288000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-288000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:37:00.018325    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:37:00.018337    4370 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1056/.minikube}
	I0917 02:37:00.018346    4370 buildroot.go:174] setting up certificates
	I0917 02:37:00.018352    4370 provision.go:84] configureAuth start
	I0917 02:37:00.018356    4370 provision.go:143] copyHostCerts
	I0917 02:37:00.018446    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem, removing ...
	I0917 02:37:00.018454    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem
	I0917 02:37:00.018573    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem (1675 bytes)
	I0917 02:37:00.018753    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem, removing ...
	I0917 02:37:00.018758    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem
	I0917 02:37:00.018814    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem (1082 bytes)
	I0917 02:37:00.018934    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem, removing ...
	I0917 02:37:00.018939    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem
	I0917 02:37:00.018989    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem (1123 bytes)
	I0917 02:37:00.019101    4370 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-288000 san=[127.0.0.1 localhost minikube stopped-upgrade-288000]
	I0917 02:37:00.056391    4370 provision.go:177] copyRemoteCerts
	I0917 02:37:00.056423    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:37:00.056430    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.089075    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 02:37:00.095977    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 02:37:00.102728    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:37:00.110134    4370 provision.go:87] duration metric: took 91.774167ms to configureAuth
	I0917 02:37:00.110143    4370 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:37:00.110241    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:37:00.110286    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.110376    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.110381    4370 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:37:00.167644    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:37:00.167652    4370 buildroot.go:70] root file system type: tmpfs
	I0917 02:37:00.167702    4370 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:37:00.167753    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.167872    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.167911    4370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:37:00.232229    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:37:00.232291    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.232424    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.232435    4370 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:37:00.597237    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
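	Editor's note: the docker.service unit installed above uses the standard systemd override idiom that its own comments describe: the bare "ExecStart=" line clears the command inherited from the base configuration so that the following ExecStart= is the only one, which is required for Type=notify services. A minimal Go sketch of rendering such an override (illustrative only; the template fields and output handling are assumptions, not minikube's provisioner code):

	// Sketch: rendering a "clear then set ExecStart" systemd override.
	package main

	import (
		"os"
		"text/template"
	)

	const override = `[Service]
	# Reset the inherited command, then declare the real one; non-oneshot
	# services may have only a single ExecStart= setting.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}}
	`

	func main() {
		t := template.Must(template.New("docker-override").Parse(override))
		// A provisioner would pipe this to `sudo tee` over SSH and then run
		// `systemctl daemon-reload && systemctl restart docker`, as in the log above.
		if err := t.Execute(os.Stdout, struct {
			Port   int
			CACert string
		}{2376, "/etc/docker/ca.pem"}); err != nil {
			panic(err)
		}
	}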
	
	I0917 02:37:00.597253    4370 machine.go:96] duration metric: took 787.66625ms to provisionDockerMachine
	I0917 02:37:00.597266    4370 start.go:293] postStartSetup for "stopped-upgrade-288000" (driver="qemu2")
	I0917 02:37:00.597272    4370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:37:00.597339    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:37:00.597353    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.627001    4370 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:37:00.628279    4370 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 02:37:00.628286    4370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/addons for local assets ...
	I0917 02:37:00.628388    4370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/files for local assets ...
	I0917 02:37:00.628518    4370 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem -> 15552.pem in /etc/ssl/certs
	I0917 02:37:00.628649    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:37:00.631068    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:37:00.638133    4370 start.go:296] duration metric: took 40.862875ms for postStartSetup
	I0917 02:37:00.638145    4370 fix.go:56] duration metric: took 21.265963708s for fixHost
	I0917 02:37:00.638178    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.638276    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.638281    4370 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:37:00.692849    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565820.450382254
	
	I0917 02:37:00.692857    4370 fix.go:216] guest clock: 1726565820.450382254
	I0917 02:37:00.692861    4370 fix.go:229] Guest: 2024-09-17 02:37:00.450382254 -0700 PDT Remote: 2024-09-17 02:37:00.638147 -0700 PDT m=+21.372789251 (delta=-187.764746ms)
	I0917 02:37:00.692872    4370 fix.go:200] guest clock delta is within tolerance: -187.764746ms
	I0917 02:37:00.692875    4370 start.go:83] releasing machines lock for "stopped-upgrade-288000", held for 21.320700042s
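	Editor's note: the fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the result when the delta is within tolerance. A minimal sketch of that check, reproducing the logged -187.764746ms delta (the parsing helper and the 2-second tolerance are assumptions; the log only says "within tolerance"):

	// Sketch of a guest-clock drift check: parse `date +%s.%N`, diff against
	// the host clock, and resync only when the delta exceeds a tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1726565820.450382254" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := (parts[1] + "000000000")[:9] // pad/trim to 9 nanosecond digits
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726565820.450382254") // guest value from the log
		if err != nil {
			panic(err)
		}
		host := time.Unix(1726565820, 638147000) // host clock at the same moment, from the log
		delta := guest.Sub(host)                 // -187.764746ms, matching the logged delta
		const tolerance = 2 * time.Second        // assumed threshold
		if delta < -tolerance || delta > tolerance {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		} else {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}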
	I0917 02:37:00.692944    4370 ssh_runner.go:195] Run: cat /version.json
	I0917 02:37:00.692957    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.692945    4370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:37:00.693025    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	W0917 02:37:00.693545    4370 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50472: connect: connection refused
	I0917 02:37:00.693567    4370 retry.go:31] will retry after 217.257254ms: dial tcp [::1]:50472: connect: connection refused
	W0917 02:37:00.720552    4370 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 02:37:00.720603    4370 ssh_runner.go:195] Run: systemctl --version
	I0917 02:37:00.722288    4370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:37:00.723926    4370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:37:00.723956    4370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 02:37:00.726744    4370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 02:37:00.731362    4370 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
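	[editor's note] The two find/sed invocations above normalize any bridge and podman CNI configs onto minikube's pod CIDR. Reduced to the single file the log reports configuring, the rewrite amounts to (sketch; substitutions copied from the log lines above):

	    # Pin the podman bridge CNI config to minikube's default pod network.
	    sudo sed -i -r \
	      -e 's|^( *)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
	      -e 's|^( *)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
	      /etc/cni/net.d/87-podman-bridge.conflist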
	I0917 02:37:00.731372    4370 start.go:495] detecting cgroup driver to use...
	I0917 02:37:00.731448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:37:00.738418    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 02:37:00.741987    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:37:00.745235    4370 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:37:00.745264    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:37:00.748205    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:37:00.751035    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:37:00.754319    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:37:00.757659    4370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:37:00.760764    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:37:00.763607    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:37:00.766695    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:37:00.770079    4370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:37:00.773070    4370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:37:00.775603    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:00.855135    4370 ssh_runner.go:195] Run: sudo systemctl restart containerd
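	[editor's note] The sed series above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver and the runc v2 shim before the restart. The essential steps, condensed (same file and substitutions as the log):

	    # Switch containerd to cgroupfs + io.containerd.runc.v2, then restart it.
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	    sudo systemctl daemon-reload && sudo systemctl restart containerd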
	I0917 02:37:00.861492    4370 start.go:495] detecting cgroup driver to use...
	I0917 02:37:00.861547    4370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:37:00.866663    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:37:00.871563    4370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:37:00.880953    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:37:00.885538    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:37:00.890293    4370 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:37:00.952701    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:37:00.982586    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:37:00.988510    4370 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:37:00.990046    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:37:00.995360    4370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 02:37:01.002424    4370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:37:01.065582    4370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:37:01.143698    4370 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:37:01.143762    4370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:37:01.148689    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:01.226782    4370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:37:02.390174    4370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163378792s)
	I0917 02:37:02.390244    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:37:02.394655    4370 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:37:02.401255    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:37:02.405659    4370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:37:02.486784    4370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:37:02.548590    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:02.610895    4370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:37:02.617570    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:37:02.622569    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:02.689333    4370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:37:02.730032    4370 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:37:02.730128    4370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:37:02.732579    4370 start.go:563] Will wait 60s for crictl version
	I0917 02:37:02.732652    4370 ssh_runner.go:195] Run: which crictl
	I0917 02:37:02.734316    4370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:37:02.750408    4370 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 02:37:02.750488    4370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:37:02.768013    4370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:37:02.789828    4370 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 02:37:02.789925    4370 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 02:37:02.791671    4370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
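	[editor's note] The grep-then-rewrite pair above is an idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current mapping, and copy the temp file back into place. Spelled out (pattern verbatim from the log; $$ is the shell PID, used as a temp-file suffix):

	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts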
	I0917 02:37:02.795716    4370 kubeadm.go:883] updating cluster {Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 02:37:02.795767    4370 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:37:02.795830    4370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:37:02.810393    4370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:37:02.810402    4370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:37:02.810461    4370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:37:02.814331    4370 ssh_runner.go:195] Run: which lz4
	I0917 02:37:02.816234    4370 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 02:37:02.817766    4370 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 02:37:02.817791    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 02:37:03.757489    4370 docker.go:649] duration metric: took 941.327208ms to copy over tarball
	I0917 02:37:03.757575    4370 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
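	[editor's note] Since the existence check found no /preloaded.tar.lz4 on the guest, the ~360 MB preload tarball is copied in and unpacked over /var, preserving file capabilities so the binaries inside keep working. The extract step, as the log runs it:

	    # Unpack preloaded images into /var, keeping xattrs/capabilities;
	    # -I lz4 tells tar to decompress through the lz4 program.
	    sudo tar --xattrs --xattrs-include security.capability \
	      -I lz4 -C /var -xf /preloaded.tar.lz4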
	I0917 02:37:02.591776    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:02.591902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:02.603535    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:02.603620    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:02.615007    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:02.615092    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:02.626656    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:02.626732    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:02.638302    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:02.638392    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:02.649525    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:02.649611    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:02.660869    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:02.660958    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:02.671432    4234 logs.go:276] 0 containers: []
	W0917 02:37:02.671445    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:02.671520    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:02.682424    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:02.682440    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:02.682445    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:02.695334    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:02.695345    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:02.716305    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:02.716317    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:02.730299    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:02.730310    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:02.768683    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:02.768692    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:02.787021    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:02.787042    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:02.832629    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:02.832643    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:02.857329    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:02.857345    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:02.873175    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:02.873186    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:02.878298    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:02.878309    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:02.894354    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:02.894370    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:02.907650    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:02.907666    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:02.933297    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:02.933310    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:02.973293    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:02.973306    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:02.988668    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:02.988681    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:03.029630    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:03.029643    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:03.045948    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:03.045961    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:05.560764    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:04.905177    4370 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147589125s)
	I0917 02:37:04.905190    4370 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 02:37:04.920994    4370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:37:04.924088    4370 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 02:37:04.929011    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:04.990282    4370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:37:06.375893    4370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.385602208s)
	I0917 02:37:06.375999    4370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:37:06.389038    4370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:37:06.389048    4370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:37:06.389053    4370 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 02:37:06.393875    4370 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:06.396603    4370 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.398832    4370 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:06.398974    4370 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.401316    4370 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.401341    4370 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.402622    4370 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.402558    4370 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.404538    4370 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.404539    4370 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.405962    4370 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 02:37:06.405994    4370 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.407264    4370 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.407293    4370 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.408063    4370 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 02:37:06.408978    4370 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.838100    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.839133    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.848715    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.849786    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.852323    4370 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 02:37:06.852338    4370 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 02:37:06.852346    4370 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.852347    4370 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.852392    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.852452    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.864378    4370 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 02:37:06.864401    4370 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.864466    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.872000    4370 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 02:37:06.872024    4370 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.872087    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.877755    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.879391    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 02:37:06.881610    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 02:37:06.881678    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 02:37:06.881681    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0917 02:37:06.892068    4370 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 02:37:06.892211    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.897817    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 02:37:06.900509    4370 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 02:37:06.900527    4370 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.900544    4370 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 02:37:06.900553    4370 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 02:37:06.900584    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.900592    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0917 02:37:06.906760    4370 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 02:37:06.906777    4370 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.906833    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.919277    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 02:37:06.919340    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 02:37:06.919412    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:37:06.919413    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 02:37:06.923870    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 02:37:06.923891    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 02:37:06.923901    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 02:37:06.923969    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 02:37:06.923980    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0917 02:37:06.923990    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:37:06.932371    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 02:37:06.932395    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 02:37:06.943657    4370 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 02:37:06.943671    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 02:37:07.016843    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0917 02:37:07.038458    4370 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:37:07.038474    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 02:37:07.143170    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0917 02:37:07.250602    4370 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:37:07.250629    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0917 02:37:07.277309    4370 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 02:37:07.277442    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.400874    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 02:37:07.400912    4370 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 02:37:07.400937    4370 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.401013    4370 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.414248    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 02:37:07.414380    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:37:07.415749    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 02:37:07.415763    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 02:37:07.446275    4370 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:37:07.446289    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 02:37:07.690313    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 02:37:07.690356    4370 cache_images.go:92] duration metric: took 1.301302791s to LoadCachedImages
	W0917 02:37:07.690389    4370 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
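	[editor's note] Because the preload carried k8s.gcr.io tags while this minikube expects registry.k8s.io names, each required image is re-transferred from the local cache and piped into the guest's Docker daemon; the kube-apiserver (and other kube-*) cache files are absent on the host, hence the warning above. The load step itself, as the log runs it for one image:

	    # Load a transferred image tarball into the guest's daemon.
	    sudo cat /var/lib/minikube/images/pause_3.7 | docker load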
	I0917 02:37:07.690399    4370 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 02:37:07.690452    4370 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-288000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 02:37:07.690531    4370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:37:07.704062    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:37:07.704080    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:37:07.704087    4370 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:37:07.704099    4370 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-288000 NodeName:stopped-upgrade-288000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:37:07.704159    4370 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-288000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
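	[editor's note] The rendered KubeletConfiguration above pins cgroupDriver: cgroupfs, which must agree with what the container runtime reports; the docker info call a few lines below is exactly that check. A quick manual consistency check on the guest (sketch; kubelet config path taken from the ExecStart line above):

	    docker info --format '{{.CgroupDriver}}'          # should print cgroupfs
	    grep cgroupDriver /var/lib/kubelet/config.yaml    # should agree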
	I0917 02:37:07.704220    4370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 02:37:07.706884    4370 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:37:07.706914    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 02:37:07.709911    4370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 02:37:07.714702    4370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:37:07.719282    4370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0917 02:37:07.724615    4370 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 02:37:07.725761    4370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:37:07.729617    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:07.810834    4370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:37:07.821004    4370 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000 for IP: 10.0.2.15
	I0917 02:37:07.821015    4370 certs.go:194] generating shared ca certs ...
	I0917 02:37:07.821024    4370 certs.go:226] acquiring lock for ca certs: {Name:mkff5fc329c6145be4c1381e1b58175b65aa8cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.821195    4370 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key
	I0917 02:37:07.821273    4370 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key
	I0917 02:37:07.821280    4370 certs.go:256] generating profile certs ...
	I0917 02:37:07.821356    4370 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key
	I0917 02:37:07.821375    4370 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c
	I0917 02:37:07.821384    4370 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 02:37:07.896905    4370 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c ...
	I0917 02:37:07.896922    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c: {Name:mk7a15f968916d0ad32e297bea40826c255d208a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.897212    4370 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c ...
	I0917 02:37:07.897216    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c: {Name:mk7883df2a29dfa3e4e916f1dc22deae5b84d83d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.897366    4370 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt
	I0917 02:37:07.897498    4370 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key
	I0917 02:37:07.897649    4370 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.key
	I0917 02:37:07.897780    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem (1338 bytes)
	W0917 02:37:07.897813    4370 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555_empty.pem, impossibly tiny 0 bytes
	I0917 02:37:07.897819    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 02:37:07.897844    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem (1082 bytes)
	I0917 02:37:07.897865    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:37:07.897883    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem (1675 bytes)
	I0917 02:37:07.897925    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:37:07.898291    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:37:07.905549    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 02:37:07.911922    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:37:07.918651    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:37:07.925922    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 02:37:07.933212    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 02:37:07.940050    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:37:07.946589    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 02:37:07.953942    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem --> /usr/share/ca-certificates/1555.pem (1338 bytes)
	I0917 02:37:07.960366    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /usr/share/ca-certificates/15552.pem (1708 bytes)
	I0917 02:37:07.966729    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:37:07.973673    4370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 02:37:07.978998    4370 ssh_runner.go:195] Run: openssl version
	I0917 02:37:07.980955    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15552.pem && ln -fs /usr/share/ca-certificates/15552.pem /etc/ssl/certs/15552.pem"
	I0917 02:37:07.983980    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.985285    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:53 /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.985309    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.986958    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15552.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:37:07.990225    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:37:07.993449    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.995071    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.995090    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.996948    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:37:07.999620    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1555.pem && ln -fs /usr/share/ca-certificates/1555.pem /etc/ssl/certs/1555.pem"
	I0917 02:37:08.002788    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.004248    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:53 /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.004270    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.005892    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1555.pem /etc/ssl/certs/51391683.0"
	I0917 02:37:08.009048    4370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:37:08.010369    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:37:08.012283    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:37:08.014051    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:37:08.016055    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:37:08.017852    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:37:08.019679    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
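	[editor's note] The symlink dance above follows OpenSSL's trust-store convention (a CA is looked up at /etc/ssl/certs/<subject-hash>.0), and the -checkend 86400 runs fail only if a cert expires within the next 24 hours. Both checks, condensed from the log's own commands:

	    # Link a CA under its subject hash, then confirm a cert outlives 24h.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt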
	I0917 02:37:08.021421    4370 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:37:08.021502    4370 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:37:08.031427    4370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:37:08.034815    4370 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:37:08.034826    4370 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:37:08.034853    4370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:37:08.038511    4370 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:37:08.038800    4370 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-288000" does not appear in /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:37:08.038895    4370 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1056/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-288000" cluster setting kubeconfig missing "stopped-upgrade-288000" context setting]
	I0917 02:37:08.039073    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:08.039507    4370 kapi.go:59] client config for stopped-upgrade-288000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106395800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:37:08.039824    4370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:37:08.042795    4370 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-288000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0917 02:37:08.042801    4370 kubeadm.go:1160] stopping kube-system containers ...
	I0917 02:37:08.042853    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:37:08.053693    4370 docker.go:483] Stopping containers: [5d12a44bd79e 7b4b71b6f19a d7b6ff64cafe b1296b57ee41 80dbf74e70dd 637480f75136 b459245dcdb4 7d82f00a9f22 2bd07895721d]
	I0917 02:37:08.053778    4370 ssh_runner.go:195] Run: docker stop 5d12a44bd79e 7b4b71b6f19a d7b6ff64cafe b1296b57ee41 80dbf74e70dd 637480f75136 b459245dcdb4 7d82f00a9f22 2bd07895721d
	I0917 02:37:08.064500    4370 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 02:37:08.069849    4370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:37:08.072968    4370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:37:08.072979    4370 kubeadm.go:157] found existing configuration files:
	
	I0917 02:37:08.073005    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0917 02:37:08.075421    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:37:08.075452    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:37:08.078215    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0917 02:37:08.081291    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:37:08.081314    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:37:08.083958    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0917 02:37:08.086595    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:37:08.086629    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:37:08.089888    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0917 02:37:08.092607    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:37:08.092631    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
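
The four grep-then-rm pairs above are one loop: for each kubeconfig, check whether it already points at the expected control-plane endpoint and delete it if not. A missing file makes grep exit with status 2, which is treated the same as a non-match, as the log shows. A sketch, with the endpoint hard-coded for illustration:

    package main

    import "os/exec"

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50506"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero both when the pattern is absent and when
    		// the file does not exist; either way the file cannot be reused.
    		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
    			exec.Command("sudo", "rm", "-f", conf).Run()
    		}
    	}
    }
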
	I0917 02:37:08.095107    4370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:37:08.098239    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.120898    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.574813    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.715064    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.735686    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
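
Rather than a full `kubeadm init`, this reconfigure path replays individual init phases against the new config, with PATH pointed at the version-matched binaries under /var/lib/minikube/binaries. The same sequence as a shell-out sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The five phases run above, in order.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
    				`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			phase)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			panic(err)
    		}
    	}
    }
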
	I0917 02:37:08.762550    4370 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:37:08.762655    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:09.264844    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:10.562907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:10.563036    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:10.574710    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:10.574808    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:10.585738    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:10.585832    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:10.597031    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:10.597109    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:10.609313    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:10.609408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:10.620787    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:10.620875    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:10.631347    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:10.631432    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:10.641700    4234 logs.go:276] 0 containers: []
	W0917 02:37:10.641714    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:10.641786    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:10.652386    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:10.652401    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:10.652406    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:10.676185    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:10.676194    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:10.712825    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:10.712834    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:10.717106    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:10.717112    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:10.751793    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:10.751805    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:10.766587    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:10.766596    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:10.791793    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:10.791809    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:10.803550    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:10.803562    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:10.814983    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:10.814994    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:10.829935    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:10.829949    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:10.847764    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:10.847777    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:10.859136    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:10.859147    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:10.873144    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:10.873157    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:10.888123    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:10.888137    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:10.926899    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:10.926913    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:10.941301    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:10.941312    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:10.955254    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:10.955264    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
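
Each "Gathering logs for …" burst follows the same recipe: resolve up to two container IDs per control-plane component with a `docker ps -a` name filter (old and new instances both match, hence "2 containers"), then tail the last 400 lines of each, alongside journalctl for kubelet and docker, dmesg, `kubectl describe nodes`, and a crictl-or-docker fallback for container status. A sketch of the per-component part:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name starts with the k8s_ prefix for the given component.
    func containerIDs(component string) []string {
    	out, _ := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids := containerIDs(c)
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("/bin/bash", "-c",
    				"docker logs --tail 400 "+id).CombinedOutput()
    			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
    		}
    	}
    }
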
	I0917 02:37:09.763724    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:09.768163    4370 api_server.go:72] duration metric: took 1.005618792s to wait for apiserver process to appear ...
	I0917 02:37:09.768172    4370 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:37:09.768187    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
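
From here the two interleaved minikube processes (pids 4234 and 4370) alternate in the log, each polling https://10.0.2.15:8443/healthz with a short client timeout and logging "stopped" whenever the request deadline passes. A minimal version of that poll; the timeout, interval, and the skipped TLS verification are illustrative, not what minikube actually configures:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // produces "Client.Timeout exceeded" above
    		Transport: &http.Transport{
    			// Sketch only: the apiserver cert is not in the host trust store.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://10.0.2.15:8443/healthz"
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("stopped: %s: %v\n", url, err)
    		} else {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
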
	I0917 02:37:13.474814    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:14.770272    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:14.770350    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:18.475254    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:18.475524    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:18.501330    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:18.501453    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:18.514723    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:18.514812    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:18.526482    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:18.526565    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:18.537168    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:18.537259    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:18.548607    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:18.548685    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:18.559104    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:18.559187    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:18.569481    4234 logs.go:276] 0 containers: []
	W0917 02:37:18.569496    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:18.569558    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:18.579596    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:18.579614    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:18.579620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:18.594636    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:18.594647    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:18.610172    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:18.610182    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:18.627876    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:18.627886    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:18.632147    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:18.632157    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:18.645742    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:18.645753    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:18.659029    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:18.659040    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:18.672988    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:18.672998    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:18.684500    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:18.684510    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:18.699205    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:18.699215    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:18.710798    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:18.710809    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:18.728996    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:18.729008    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:18.740899    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:18.740913    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:18.777833    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:18.777842    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:18.813235    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:18.813248    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:18.853246    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:18.853271    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:18.872571    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:18.872585    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:21.400375    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:19.770626    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:19.770692    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:26.402679    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:26.402829    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:26.414322    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:26.414408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:26.431854    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:26.431953    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:26.448689    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:26.448778    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:26.460489    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:26.460576    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:26.478669    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:26.478759    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:26.491205    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:26.491294    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:26.502415    4234 logs.go:276] 0 containers: []
	W0917 02:37:26.502430    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:26.502511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:26.514315    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:26.514333    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:26.514339    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:26.552675    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:26.552688    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:26.566396    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:26.566405    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:26.582118    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:26.582132    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:26.596269    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:26.596282    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:26.607746    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:26.607756    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:26.632247    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:26.632260    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:26.668582    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:26.668596    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:26.673072    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:26.673079    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:24.771186    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:24.771245    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:26.707993    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:26.708004    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:26.731677    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:26.731690    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:26.746639    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:26.746649    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:26.761210    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:26.761220    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:26.774134    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:26.774147    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:26.790101    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:26.790111    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:26.807533    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:26.807542    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:26.819672    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:26.819684    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:29.333637    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:29.771934    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:29.771981    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:34.335850    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:34.336127    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:37:34.358512    4234 logs.go:276] 2 containers: [ed3c91d07cc5 a2fd9db7db24]
	I0917 02:37:34.358630    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:37:34.375001    4234 logs.go:276] 2 containers: [8e15a0a3e969 8a41a9b8943b]
	I0917 02:37:34.375096    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:37:34.387431    4234 logs.go:276] 1 containers: [0874f7991b81]
	I0917 02:37:34.387522    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:37:34.398228    4234 logs.go:276] 2 containers: [ab5646676500 d5ee745e2bc1]
	I0917 02:37:34.398317    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:37:34.408735    4234 logs.go:276] 1 containers: [9a482fbc7c5c]
	I0917 02:37:34.408817    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:37:34.423117    4234 logs.go:276] 2 containers: [d4b5e4e0feea 678c01eacfd1]
	I0917 02:37:34.423192    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:37:34.434472    4234 logs.go:276] 0 containers: []
	W0917 02:37:34.434483    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:37:34.434548    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:37:34.445304    4234 logs.go:276] 2 containers: [c27cefb5755c 2f4533c64d10]
	I0917 02:37:34.445322    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:37:34.445328    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:37:34.483252    4234 logs.go:123] Gathering logs for kube-apiserver [ed3c91d07cc5] ...
	I0917 02:37:34.483263    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed3c91d07cc5"
	I0917 02:37:34.501718    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:37:34.501727    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:37:34.525311    4234 logs.go:123] Gathering logs for etcd [8a41a9b8943b] ...
	I0917 02:37:34.525319    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a41a9b8943b"
	I0917 02:37:34.539582    4234 logs.go:123] Gathering logs for kube-scheduler [ab5646676500] ...
	I0917 02:37:34.539592    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab5646676500"
	I0917 02:37:34.553857    4234 logs.go:123] Gathering logs for kube-controller-manager [d4b5e4e0feea] ...
	I0917 02:37:34.553871    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4b5e4e0feea"
	I0917 02:37:34.570913    4234 logs.go:123] Gathering logs for kube-controller-manager [678c01eacfd1] ...
	I0917 02:37:34.570923    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 678c01eacfd1"
	I0917 02:37:34.591165    4234 logs.go:123] Gathering logs for storage-provisioner [2f4533c64d10] ...
	I0917 02:37:34.591180    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f4533c64d10"
	I0917 02:37:34.602140    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:37:34.602151    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:37:34.636482    4234 logs.go:123] Gathering logs for coredns [0874f7991b81] ...
	I0917 02:37:34.636499    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0874f7991b81"
	I0917 02:37:34.647603    4234 logs.go:123] Gathering logs for kube-proxy [9a482fbc7c5c] ...
	I0917 02:37:34.647615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a482fbc7c5c"
	I0917 02:37:34.659871    4234 logs.go:123] Gathering logs for storage-provisioner [c27cefb5755c] ...
	I0917 02:37:34.659884    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c27cefb5755c"
	I0917 02:37:34.671813    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:37:34.671828    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:37:34.676280    4234 logs.go:123] Gathering logs for kube-apiserver [a2fd9db7db24] ...
	I0917 02:37:34.676286    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fd9db7db24"
	I0917 02:37:34.713945    4234 logs.go:123] Gathering logs for etcd [8e15a0a3e969] ...
	I0917 02:37:34.713955    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e15a0a3e969"
	I0917 02:37:34.739427    4234 logs.go:123] Gathering logs for kube-scheduler [d5ee745e2bc1] ...
	I0917 02:37:34.739437    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5ee745e2bc1"
	I0917 02:37:34.754880    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:37:34.754891    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:37:34.772335    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:34.772355    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:37.269710    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:42.272478    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:42.272564    4234 kubeadm.go:597] duration metric: took 4m4.598357583s to restartPrimaryControlPlane
	W0917 02:37:42.272621    4234 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 02:37:42.272650    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
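
This is the fallback path: after the roughly 4-minute restartPrimaryControlPlane budget expires without a healthy apiserver, minikube gives up on restarting in place, wipes the node with `kubeadm reset --force`, and re-initializes from scratch (the `kubeadm init` run that follows). A hedged sketch of that decision shape; the polling interval and the health callback are assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // restartOrReset polls until the deadline, then falls back to a reset.
    func restartOrReset(timeout time.Duration, healthy func() bool) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if healthy() {
    			return // restart path succeeded
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
    	exec.Command("/bin/bash", "-c",
    		`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" `+
    			`kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`).Run()
    }

    func main() {
    	restartOrReset(4*time.Minute, func() bool { return false })
    }
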
	I0917 02:37:43.256734    4234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:37:43.261654    4234 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:37:43.264574    4234 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:37:43.267301    4234 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:37:43.267309    4234 kubeadm.go:157] found existing configuration files:
	
	I0917 02:37:43.267343    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf
	I0917 02:37:43.270680    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:37:43.270715    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:37:43.273969    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf
	I0917 02:37:43.276457    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:37:43.276484    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:37:43.279238    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf
	I0917 02:37:43.282313    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:37:43.282340    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:37:43.285494    4234 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf
	I0917 02:37:43.288091    4234 kubeadm.go:163] "https://control-plane.minikube.internal:50268" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50268 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:37:43.288113    4234 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 02:37:43.291118    4234 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 02:37:43.311263    4234 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 02:37:43.311303    4234 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 02:37:43.365653    4234 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 02:37:43.365712    4234 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 02:37:43.365799    4234 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 02:37:43.416175    4234 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:37:43.420329    4234 out.go:235]   - Generating certificates and keys ...
	I0917 02:37:43.420369    4234 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 02:37:43.420404    4234 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 02:37:43.420440    4234 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:37:43.420475    4234 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 02:37:43.420520    4234 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:37:43.420604    4234 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 02:37:43.420679    4234 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 02:37:43.420757    4234 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:37:43.420799    4234 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:37:43.420838    4234 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:37:43.420861    4234 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 02:37:43.420898    4234 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:37:43.475379    4234 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:37:43.508482    4234 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:37:43.548221    4234 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:37:43.607939    4234 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:37:43.647552    4234 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:37:43.647970    4234 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:37:43.648013    4234 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 02:37:43.732754    4234 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:37:39.773127    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:39.773188    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:43.739879    4234 out.go:235]   - Booting up control plane ...
	I0917 02:37:43.739930    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:37:43.739979    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:37:43.740017    4234 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:37:43.740060    4234 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:37:43.740144    4234 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 02:37:48.235630    4234 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503029 seconds
	I0917 02:37:48.235752    4234 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 02:37:48.240355    4234 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 02:37:48.759541    4234 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 02:37:48.759840    4234 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-202000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 02:37:49.264006    4234 kubeadm.go:310] [bootstrap-token] Using token: 7pag7d.3y4wox6ghhmt7q13
	I0917 02:37:44.773488    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:44.773510    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:49.270412    4234 out.go:235]   - Configuring RBAC rules ...
	I0917 02:37:49.270475    4234 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 02:37:49.270533    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 02:37:49.273899    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 02:37:49.274871    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 02:37:49.275786    4234 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 02:37:49.276646    4234 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 02:37:49.279989    4234 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 02:37:49.441465    4234 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 02:37:49.670371    4234 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 02:37:49.670780    4234 kubeadm.go:310] 
	I0917 02:37:49.670810    4234 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 02:37:49.670813    4234 kubeadm.go:310] 
	I0917 02:37:49.670851    4234 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 02:37:49.670856    4234 kubeadm.go:310] 
	I0917 02:37:49.670870    4234 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 02:37:49.670903    4234 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 02:37:49.670927    4234 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 02:37:49.670933    4234 kubeadm.go:310] 
	I0917 02:37:49.670997    4234 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 02:37:49.671015    4234 kubeadm.go:310] 
	I0917 02:37:49.671043    4234 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 02:37:49.671048    4234 kubeadm.go:310] 
	I0917 02:37:49.671075    4234 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 02:37:49.671116    4234 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 02:37:49.671157    4234 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 02:37:49.671162    4234 kubeadm.go:310] 
	I0917 02:37:49.671206    4234 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 02:37:49.671247    4234 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 02:37:49.671250    4234 kubeadm.go:310] 
	I0917 02:37:49.671297    4234 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7pag7d.3y4wox6ghhmt7q13 \
	I0917 02:37:49.671358    4234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d \
	I0917 02:37:49.671376    4234 kubeadm.go:310] 	--control-plane 
	I0917 02:37:49.671379    4234 kubeadm.go:310] 
	I0917 02:37:49.671425    4234 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 02:37:49.671429    4234 kubeadm.go:310] 
	I0917 02:37:49.671472    4234 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7pag7d.3y4wox6ghhmt7q13 \
	I0917 02:37:49.671532    4234 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d 
	I0917 02:37:49.671605    4234 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 02:37:49.671613    4234 cni.go:84] Creating CNI manager for ""
	I0917 02:37:49.671621    4234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:37:49.679335    4234 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 02:37:49.682400    4234 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 02:37:49.685731    4234 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
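
The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced on the previous line. Its exact contents are not in the log; the following is a plausible minimal bridge conflist of the kind minikube writes, with the subnet and plugin options being assumptions:

    package main

    import "os"

    // Illustrative bridge CNI conflist; "bridge", "host-local", and
    // "portmap" are standard CNI plugins, the subnet is an assumption.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist",
    		[]byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }
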
	I0917 02:37:49.690456    4234 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:37:49.690501    4234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 02:37:49.690527    4234 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-202000 minikube.k8s.io/updated_at=2024_09_17T02_37_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=running-upgrade-202000 minikube.k8s.io/primary=true
	I0917 02:37:49.728069    4234 kubeadm.go:1113] duration metric: took 37.606292ms to wait for elevateKubeSystemPrivileges
	I0917 02:37:49.728078    4234 ops.go:34] apiserver oom_adj: -16
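
Two bookkeeping steps land right after init: reading the apiserver's OOM score adjustment (the -16 above means the kernel strongly prefers not to kill it under memory pressure) and the elevateKubeSystemPrivileges step, which binds cluster-admin to kube-system's default service account so addons can run. A sketch of the latter, mirroring the kubectl invocation in the log:

    package main

    import "os/exec"

    func main() {
    	// Equivalent of elevateKubeSystemPrivileges: grant cluster-admin
    	// to the kube-system default service account.
    	cmd := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"create", "clusterrolebinding", "minikube-rbac",
    		"--clusterrole=cluster-admin",
    		"--serviceaccount=kube-system:default",
    		"--kubeconfig=/var/lib/minikube/kubeconfig")
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
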
	I0917 02:37:49.729831    4234 kubeadm.go:394] duration metric: took 4m12.0692795s to StartCluster
	I0917 02:37:49.729846    4234 settings.go:142] acquiring lock: {Name:mk2d861f3b7e502753ec34b4d96136a66d57e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:49.729938    4234 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:37:49.730314    4234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:49.730541    4234 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:37:49.730585    4234 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:37:49.730616    4234 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-202000"
	I0917 02:37:49.730624    4234 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-202000"
	I0917 02:37:49.730625    4234 config.go:182] Loaded profile config "running-upgrade-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W0917 02:37:49.730628    4234 addons.go:243] addon storage-provisioner should already be in state true
	I0917 02:37:49.730638    4234 host.go:66] Checking if "running-upgrade-202000" exists ...
	I0917 02:37:49.730675    4234 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-202000"
	I0917 02:37:49.730684    4234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-202000"
	I0917 02:37:49.730940    4234 retry.go:31] will retry after 631.331049ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/monitor: connect: connection refused
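
The retry.go line shows minikube's generic retry helper absorbing a refused dial on the QEMU monitor socket instead of failing the addon setup outright. A hedged sketch of such a helper; the jittered-delay policy here is an assumption, not minikube's actual backoff:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or the total timeout elapses,
    // sleeping a jittered delay between attempts.
    func retry(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return err
    		}
    		delay := time.Duration(500+rand.Intn(500)) * time.Millisecond
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    }

    func main() {
    	attempts := 0
    	_ = retry(5*time.Second, func() error {
    		attempts++
    		if attempts < 3 {
    			return errors.New("connect: connection refused")
    		}
    		return nil
    	})
    }
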
	I0917 02:37:49.731661    4234 kapi.go:59] client config for running-upgrade-202000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/running-upgrade-202000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106385800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:37:49.731782    4234 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-202000"
	W0917 02:37:49.731787    4234 addons.go:243] addon default-storageclass should already be in state true
	I0917 02:37:49.731793    4234 host.go:66] Checking if "running-upgrade-202000" exists ...
	I0917 02:37:49.732320    4234 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 02:37:49.732326    4234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 02:37:49.732332    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:37:49.734300    4234 out.go:177] * Verifying Kubernetes components...
	I0917 02:37:49.742167    4234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:49.833069    4234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:37:49.837864    4234 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:37:49.837918    4234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:49.840335    4234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 02:37:49.843083    4234 api_server.go:72] duration metric: took 112.53125ms to wait for apiserver process to appear ...
	I0917 02:37:49.843094    4234 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:37:49.843101    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:50.145995    4234 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:37:50.146006    4234 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:37:50.368094    4234 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:50.372199    4234 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:37:50.372207    4234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 02:37:50.372218    4234 sshutil.go:53] new ssh client: &{IP:localhost Port:50236 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/running-upgrade-202000/id_rsa Username:docker}
	I0917 02:37:50.402086    4234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
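
Addon installation is the same two-step for both manifests: scp the YAML into /etc/kubernetes/addons/ on the guest, then apply it with the version-pinned kubectl against the in-VM kubeconfig. As a sketch of the apply half:

    package main

    import "os/exec"

    // applyAddon mirrors the log: the manifest is already on the guest;
    // apply it with the pinned kubectl and the in-VM kubeconfig.
    func applyAddon(manifest string) error {
    	return exec.Command("sudo", "env",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"apply", "-f", manifest).Run()
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		if err := applyAddon(m); err != nil {
    			panic(err)
    		}
    	}
    }
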
	I0917 02:37:49.774316    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:49.774336    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:54.844907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:54.844969    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:54.775390    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:54.775471    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:59.843536    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:59.843568    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:59.775752    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:59.775782    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:04.842517    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:04.842566    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:04.776575    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:04.776614    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:09.842073    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:09.842091    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:09.777845    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:09.777948    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:09.789459    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:09.789546    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:09.800188    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:09.800281    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:09.810697    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:09.810764    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:09.820945    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:09.821026    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:09.831690    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:09.831785    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:09.842323    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:09.842396    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:09.852578    4370 logs.go:276] 0 containers: []
	W0917 02:38:09.852590    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:09.852653    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:09.862907    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:09.862927    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:09.862932    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:09.876843    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:09.876857    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:09.890816    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:09.890827    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:09.909822    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:09.909832    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:09.914066    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:09.914076    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:09.924775    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:09.924786    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:09.936512    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:09.936522    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:09.958677    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:09.958690    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:09.976911    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:09.976922    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:09.988274    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:09.988284    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:10.064951    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:10.064962    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:10.089625    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:10.089636    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:10.101742    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:10.101754    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:10.139589    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:10.139608    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:10.181334    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:10.181344    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:10.196072    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:10.196083    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:10.207824    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:10.207837    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:12.720562    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:14.842011    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:14.842092    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:17.722213    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:17.722492    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:17.742352    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:17.742485    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:17.756907    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:17.757003    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:17.768800    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:17.768891    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:17.780531    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:17.780616    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:17.790780    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:17.790859    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:17.805596    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:17.805678    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:17.816348    4370 logs.go:276] 0 containers: []
	W0917 02:38:17.816359    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:17.816434    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:17.831714    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:17.831729    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:17.831734    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:17.848430    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:17.848440    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:17.852692    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:17.852699    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:17.866879    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:17.866893    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:17.906296    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:17.906308    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:17.919406    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:17.919417    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:17.937641    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:17.937655    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:17.949563    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:17.949574    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:17.964341    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:17.964355    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:17.975598    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:17.975611    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:17.988088    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:17.988101    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:18.025698    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:18.025712    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:18.063356    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:18.063369    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:18.088250    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:18.088266    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:18.113996    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:18.114003    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:18.135115    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:18.135126    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:18.146632    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:18.146644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:19.842820    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:19.842884    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 02:38:20.142031    4234 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 02:38:20.146864    4234 out.go:177] * Enabled addons: storage-provisioner
	I0917 02:38:20.153786    4234 addons.go:510] duration metric: took 30.429468292s for enable addons: enabled=[storage-provisioner]
	I0917 02:38:20.659410    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:24.843903    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:24.843999    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:25.659740    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:25.660032    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:25.679922    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:25.680063    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:25.695858    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:25.695947    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:25.710504    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:25.710602    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:25.726035    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:25.726123    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:25.743266    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:25.743351    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:25.754147    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:25.754235    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:25.769103    4370 logs.go:276] 0 containers: []
	W0917 02:38:25.769118    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:25.769201    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:25.779934    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:25.779952    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:25.779958    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:25.793748    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:25.793762    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:25.806719    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:25.806735    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:25.818285    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:25.818298    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:25.860098    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:25.860114    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:25.899363    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:25.899376    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:25.924935    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:25.924942    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:25.940131    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:25.940143    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:25.958917    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:25.958931    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:25.980345    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:25.980354    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:25.995835    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:25.995850    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:26.010476    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:26.010492    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:26.021823    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:26.021834    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:26.033474    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:26.033484    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:26.046892    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:26.046903    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:26.059106    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:26.059116    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:26.102361    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:26.102372    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:28.609693    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:29.845671    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:29.845722    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:33.612048    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:33.612303    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:33.629109    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:33.629199    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:33.641012    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:33.641100    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:33.660107    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:33.660181    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:33.670609    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:33.670697    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:33.681218    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:33.681294    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:33.694017    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:33.694090    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:33.704609    4370 logs.go:276] 0 containers: []
	W0917 02:38:33.704622    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:33.704685    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:33.719692    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:33.719711    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:33.719717    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:33.733603    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:33.733615    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:33.744996    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:33.745006    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:33.770181    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:33.770194    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:33.807146    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:33.807156    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:33.818552    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:33.818565    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:33.833516    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:33.833527    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:33.854305    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:33.854319    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:33.866080    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:33.866090    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:33.878414    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:33.878429    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:33.915404    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:33.915416    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:33.929260    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:33.929272    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:33.967433    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:33.967448    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:33.982029    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:33.982039    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:33.986674    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:33.986682    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:34.000140    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:34.000152    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:34.017031    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:34.017041    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:34.847451    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:34.847536    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:36.534369    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:39.849944    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:39.849969    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:41.536771    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:41.537028    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:41.559782    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:41.559904    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:41.582471    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:41.582571    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:41.594548    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:41.594637    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:41.605259    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:41.605346    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:41.615429    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:41.615512    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:41.626045    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:41.626127    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:41.635839    4370 logs.go:276] 0 containers: []
	W0917 02:38:41.635851    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:41.635928    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:41.650783    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:41.650800    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:41.650806    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:41.661633    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:41.661644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:41.673663    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:41.673674    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:41.690741    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:41.690750    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:41.701901    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:41.701915    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:41.726963    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:41.726971    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:41.738662    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:41.738675    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:41.742730    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:41.742737    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:41.780890    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:41.780904    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:41.795478    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:41.795491    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:41.807206    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:41.807238    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:41.821710    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:41.821723    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:41.860797    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:41.860809    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:41.898349    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:41.898360    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:41.912088    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:41.912098    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:41.933283    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:41.933297    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:41.948149    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:41.948165    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:44.851932    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:44.852048    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:44.462277    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:49.854456    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:49.854563    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:49.879590    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:38:49.879688    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:49.900125    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:38:49.900214    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:49.911202    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:38:49.911294    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:49.921866    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:38:49.921947    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:49.934701    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:38:49.934790    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:49.945126    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:38:49.945197    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:49.955919    4234 logs.go:276] 0 containers: []
	W0917 02:38:49.955933    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:49.956002    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:49.967975    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:38:49.967988    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:49.967993    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:50.006883    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:38:50.006894    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:38:50.025643    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:38:50.025652    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:38:50.038642    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:38:50.038653    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:38:50.060410    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:38:50.060421    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:50.073452    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:50.073465    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:50.107104    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:50.107116    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:50.111725    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:38:50.111734    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:38:50.123207    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:38:50.123219    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:38:50.137814    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:38:50.137823    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:38:50.149511    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:50.149526    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:50.174375    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:38:50.174385    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:38:50.188985    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:38:50.188996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:38:49.463316    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:49.463550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:49.480884    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:49.480993    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:49.494219    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:49.494317    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:49.506052    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:49.506143    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:49.516567    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:49.516652    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:49.533127    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:49.533210    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:49.546037    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:49.546126    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:49.556042    4370 logs.go:276] 0 containers: []
	W0917 02:38:49.556054    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:49.556119    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:49.566713    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:49.566734    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:49.566740    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:49.601409    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:49.601420    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:49.613453    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:49.613466    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:49.627429    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:49.627441    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:49.642383    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:49.642396    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:49.666249    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:49.666259    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:49.679602    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:49.679616    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:49.701140    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:49.701154    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:49.712511    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:49.712522    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:49.726656    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:49.726669    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:49.741085    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:49.741095    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:49.752504    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:49.752518    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:49.769722    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:49.769735    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:49.781434    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:49.781448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:49.818590    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:49.818600    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:49.823502    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:49.823519    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:49.862352    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:49.862369    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:52.378975    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:52.712181    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:57.381360    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:57.381569    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:57.394344    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:57.394436    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:57.406139    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:57.406224    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:57.416881    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:57.416995    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:57.430582    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:57.430668    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:57.440973    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:57.441057    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:57.451455    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:57.451535    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:57.461502    4370 logs.go:276] 0 containers: []
	W0917 02:38:57.461513    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:57.461581    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:57.472158    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:57.472179    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:57.472185    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:57.509548    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:57.509555    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:57.550227    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:57.550245    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:57.563798    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:57.563808    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:57.574831    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:57.574841    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:57.595724    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:57.595740    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:57.613889    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:57.613899    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:57.628803    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:57.628814    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:57.639802    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:57.639813    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:57.657533    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:57.657548    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:57.695308    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:57.695319    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:57.707006    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:57.707016    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:57.718764    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:57.718777    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:57.723740    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:57.723750    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:57.744970    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:57.744981    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:57.757190    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:57.757202    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:57.781544    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:57.781556    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:57.714364    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:57.714462    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:57.726383    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:38:57.726478    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:57.737750    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:38:57.737836    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:57.750630    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:38:57.750722    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:57.762560    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:38:57.762645    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:57.777148    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:38:57.777238    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:57.795473    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:38:57.795554    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:57.810182    4234 logs.go:276] 0 containers: []
	W0917 02:38:57.810194    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:57.810273    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:57.821229    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:38:57.821247    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:57.821252    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:57.825821    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:57.825830    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:57.859303    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:57.859315    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:57.884656    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:38:57.884667    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:38:57.896790    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:38:57.896806    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:38:57.914428    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:57.914438    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:57.950741    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:38:57.950758    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:38:57.967918    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:38:57.967933    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:38:57.981878    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:38:57.981893    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:38:57.993228    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:38:57.993244    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:38:58.005109    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:38:58.005125    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:38:58.019507    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:38:58.019521    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:38:58.031846    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:38:58.031858    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:00.544879    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:00.296593    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:05.547077    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:05.547193    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:05.559818    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:05.559911    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:05.571625    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:05.571717    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:05.583725    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:05.583816    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:05.599657    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:05.599751    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:05.611416    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:05.611509    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:05.623590    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:05.623666    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:05.635805    4234 logs.go:276] 0 containers: []
	W0917 02:39:05.635815    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:05.635893    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:05.647388    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:05.647403    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:05.647409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:05.660270    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:05.660283    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:05.685613    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:05.685623    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:05.697821    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:05.697832    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:05.733746    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:05.733764    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:05.738260    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:05.738267    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:05.772570    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:05.772584    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:05.784895    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:05.784907    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:05.796765    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:05.796780    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:05.811700    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:05.811710    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:05.826060    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:05.826074    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:05.837611    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:05.837624    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:05.855140    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:05.855149    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:05.299094    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:05.299345    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:05.318267    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:05.318384    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:05.333887    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:05.333981    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:05.345712    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:05.345793    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:05.356522    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:05.356598    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:05.366735    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:05.366800    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:05.377570    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:05.377657    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:05.390001    4370 logs.go:276] 0 containers: []
	W0917 02:39:05.390014    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:05.390090    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:05.402132    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:05.402151    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:05.402157    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:05.414460    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:05.414471    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:05.451940    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:05.451953    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:05.468008    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:05.468022    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:05.487207    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:05.487216    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:05.498187    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:05.498197    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:05.535702    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:05.535712    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:05.571494    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:05.571506    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:05.587346    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:05.587362    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:05.599674    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:05.599685    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:05.622145    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:05.622163    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:05.635536    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:05.635548    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:05.651539    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:05.651550    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:05.678041    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:05.678055    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:05.682811    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:05.682817    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:05.698520    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:05.698528    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:05.713327    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:05.713342    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:08.226799    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:08.374515    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:13.229126    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:13.229322    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:13.242866    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:13.242960    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:13.254052    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:13.254148    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:13.264368    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:13.264442    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:13.275012    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:13.275099    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:13.285586    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:13.285659    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:13.297285    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:13.297377    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:13.306915    4370 logs.go:276] 0 containers: []
	W0917 02:39:13.306926    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:13.307003    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:13.317394    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:13.317413    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:13.317418    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:13.321608    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:13.321618    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:13.337279    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:13.337288    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:13.355008    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:13.355018    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:13.368260    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:13.368270    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:13.379975    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:13.379986    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:13.419731    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:13.419747    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:13.434487    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:13.434500    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:13.450372    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:13.450385    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:13.462879    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:13.462892    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:13.475634    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:13.475647    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:13.494730    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:13.494746    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:13.521449    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:13.521471    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:13.560048    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:13.560059    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:13.599059    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:13.599080    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:13.615308    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:13.615317    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:13.627960    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:13.627975    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:13.376754    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
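
The timestamp above (02:39:13.376) jumps back behind the 02:39:13.627 line preceding it because this report interleaves two concurrent minikube processes whose output is flushed in per-process batches: the number in the third column of each klog header is the process id, 4370 for one run and 4234 for the other. The header layout is fixed (severity letter, MMDD, wall-clock time, pid, file:line, message), so the two streams can be separated mechanically; a sketch, using one line from this log as input:

    // Split a klog header into its fields. The layout matches the lines in
    // this report; the regex itself is an illustrative reconstruction.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([^:]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0917 02:39:13.376754    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: timeout`
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\nmsg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
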
	I0917 02:39:13.376860    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:13.388792    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:13.388885    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:13.400119    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:13.400200    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:13.411320    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:13.411408    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:13.423202    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:13.423291    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:13.434925    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:13.435017    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:13.446445    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:13.446533    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:13.457908    4234 logs.go:276] 0 containers: []
	W0917 02:39:13.457921    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:13.458003    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:13.469375    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:13.469392    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:13.469398    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:13.484485    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:13.484496    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:13.497628    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:13.497639    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:13.510290    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:13.510302    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:13.531700    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:13.531717    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:13.568369    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:13.568389    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:13.573434    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:13.573448    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:13.615085    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:13.615095    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:13.631171    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:13.631181    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:13.656047    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:13.656060    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:13.668401    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:13.668414    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:13.681398    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:13.681409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:13.699894    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:13.699908    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
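
Each "Gathering logs for X ..." step above wraps a single shell command in `/bin/bash -c`: journalctl for the kubelet and Docker units, dmesg for kernel warnings, `docker logs --tail 400` for each discovered container, `kubectl describe nodes` for object state, and a crictl/docker fallback for container status. A hedged reconstruction of that dispatch follows; the command strings are verbatim from the log, while the map and loop are assumptions. Notably, the gather order shuffles between sweeps, which is consistent with ranging over a Go map:

    // Per-container "docker logs --tail 400 <id>" entries would be appended
    // to this table from the discovery sweep.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":          "sudo journalctl -u kubelet -n 400",
            "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
            "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("%s failed: %v\n", name, err)
            }
            fmt.Printf("%s", out)
        }
    }
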
	I0917 02:39:16.213358    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:16.153941    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:21.215510    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:21.215623    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:21.227149    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:21.227241    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:21.238483    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:21.238580    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:21.250669    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:21.250719    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:21.261695    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:21.261780    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:21.273998    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:21.274093    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:21.291231    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:21.291320    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:21.302430    4234 logs.go:276] 0 containers: []
	W0917 02:39:21.302441    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:21.302518    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:21.313696    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:21.313713    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:21.313719    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:21.326534    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:21.326547    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:21.343368    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:21.343378    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:21.355708    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:21.355720    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:21.372300    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:21.372312    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:21.409327    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:21.409337    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:21.424957    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:21.424971    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:21.440466    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:21.440473    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:21.453291    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:21.453301    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:21.479750    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:21.479771    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:21.492743    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:21.492758    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:21.498433    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:21.498442    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:21.538223    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:21.538234    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:21.156234    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:21.156585    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:21.177778    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:21.177901    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:21.193179    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:21.193271    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:21.205267    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:21.205359    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:21.216241    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:21.216287    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:21.227950    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:21.227994    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:21.239499    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:21.239550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:21.250260    4370 logs.go:276] 0 containers: []
	W0917 02:39:21.250272    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:21.250349    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:21.265164    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:21.265183    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:21.265189    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:21.301991    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:21.302003    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:21.316101    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:21.316112    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:21.338236    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:21.338261    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:21.378872    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:21.378905    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:21.383784    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:21.383792    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:21.396288    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:21.396300    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:21.420761    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:21.420777    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:21.440131    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:21.440142    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:21.452617    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:21.452630    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:21.466304    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:21.466315    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:21.520119    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:21.520139    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:21.535719    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:21.535732    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:21.552627    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:21.552639    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:21.574398    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:21.574411    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:21.588594    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:21.588608    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:21.603336    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:21.603349    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:24.120159    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:24.058489    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
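
By this point the cycle has repeated twice and its cadence is visible: a 5-second healthz probe, a log sweep lasting a few hundred milliseconds to a few seconds, then a pause of roughly 2.5s before the next probe. A toy version of that outer loop; only the 5s probe timeout comes from the timestamps, while the retry interval and overall budget shown here are assumptions:

    // Toy retry loop matching the rhythm above. probe stands in for the
    // healthz GET sketched earlier and, like every probe in this window,
    // always times out.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func probe() error {
        time.Sleep(5 * time.Second) // client timeout elapsing
        return errors.New(`Get "https://10.0.2.15:8443/healthz": context deadline exceeded`)
    }

    func main() {
        deadline := time.Now().Add(time.Minute) // overall budget: an assumption
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            // ... enumerate containers and tail their logs, as above ...
            time.Sleep(2500 * time.Millisecond) // gap before the next probe
        }
        fmt.Println("gave up waiting for the apiserver")
    }
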
	I0917 02:39:29.122456    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:29.122550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:29.133802    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:29.133879    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:29.145217    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:29.145294    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:29.156780    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:29.156861    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:29.168128    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:29.168216    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:29.179169    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:29.179253    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:29.190476    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:29.190557    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:29.201121    4370 logs.go:276] 0 containers: []
	W0917 02:39:29.201133    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:29.201211    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:29.212250    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:29.212266    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:29.212272    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:29.217067    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:29.217076    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:29.253524    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:29.253534    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:29.266227    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:29.266238    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:29.060844    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:29.061045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:29.072708    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:29.072805    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:29.083529    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:29.083617    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:29.094224    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:29.094306    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:29.105075    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:29.105278    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:29.115877    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:29.115957    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:29.127802    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:29.127902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:29.142555    4234 logs.go:276] 0 containers: []
	W0917 02:39:29.142566    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:29.142639    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:29.158001    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:29.158011    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:29.158016    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:29.194453    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:29.194472    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:29.199488    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:29.199499    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:29.237175    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:29.237188    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:29.251436    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:29.251451    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:29.266381    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:29.266390    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:29.281141    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:29.281158    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:29.293779    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:29.293796    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:29.309100    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:29.309115    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:29.321376    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:29.321389    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:29.344237    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:29.344252    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:29.357385    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:29.357396    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:29.383844    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:29.383854    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
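
The "container status" one-liner just above carries its own fallback chain: `which crictl || echo crictl` substitutes crictl's full path when it is installed and otherwise leaves the bare name (which then fails), so the trailing `|| sudo docker ps -a` drops back to docker. The same preference order in plain Go, short-circuiting with LookPath instead of the echo trick (an illustration, not minikube's code):

    // Prefer crictl when present, otherwise fall back to docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        var out []byte
        var err error
        if path, lookErr := exec.LookPath("crictl"); lookErr == nil {
            out, err = exec.Command("sudo", path, "ps", "-a").CombinedOutput()
        } else {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Printf("%s", out)
    }
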
	I0917 02:39:29.281771    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:29.281780    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:29.294920    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:29.294928    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:29.310036    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:29.310045    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:29.330937    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:29.330950    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:29.371126    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:29.371137    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:29.383377    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:29.383390    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:29.395701    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:29.395714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:29.417598    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:29.417613    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:29.429725    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:29.429741    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:29.452650    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:29.452659    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:29.464762    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:29.464772    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:29.504417    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:29.504425    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:29.518477    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:29.518505    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:32.036694    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:31.898895    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:37.038965    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:37.039065    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:37.051570    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:37.051656    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:37.063881    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:37.063968    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:37.077317    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:37.077407    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:37.090244    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:37.090337    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:37.101379    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:37.101460    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:37.112926    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:37.113014    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:37.124038    4370 logs.go:276] 0 containers: []
	W0917 02:39:37.124050    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:37.124128    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:37.135923    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
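
Process 4370 keeps finding two containers per component (here [3580174f4ef8 800a9ed53592] for storage-provisioner) where 4234 finds one: `docker ps -a` lists exited containers as well as running ones, so the second ID is most likely an earlier instance left over from a restart, and the sweep tails both. A sketch chaining discovery and tailing for one component, using only commands shown in the log:

    // Discover every container (running or exited) for one component and
    // tail the last 400 lines of each, as the sweeps above do.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        component := "storage-provisioner"
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
            logs, _ := exec.Command("/bin/bash", "-c",
                "docker logs --tail 400 "+id).CombinedOutput()
            fmt.Printf("%s", logs)
        }
    }
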
	I0917 02:39:37.135942    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:37.135948    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:37.149276    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:37.149290    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:37.161306    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:37.161320    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:37.185827    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:37.185844    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:37.201175    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:37.201184    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:37.213652    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:37.213663    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:37.225621    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:37.225633    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:37.248807    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:37.248819    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:37.260340    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:37.260351    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:37.297553    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:37.297561    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:37.334600    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:37.334612    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:37.348864    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:37.348873    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:37.366405    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:37.366416    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:37.370426    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:37.370435    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:37.384041    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:37.384050    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:37.399330    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:37.399342    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:37.411859    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:37.411871    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:36.901229    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:36.901498    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:36.920306    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:36.920417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:36.936166    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:36.936261    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:36.947259    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:36.947341    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:36.958104    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:36.958179    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:36.969249    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:36.969337    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:36.980147    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:36.980237    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:36.990358    4234 logs.go:276] 0 containers: []
	W0917 02:39:36.990370    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:36.990445    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:37.001039    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:37.001054    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:37.001060    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:37.035840    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:37.035853    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:37.041119    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:37.041127    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:37.056662    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:37.056675    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:37.072209    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:37.072226    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:37.085182    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:37.085194    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:37.123969    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:37.123981    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:37.140450    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:37.140462    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:37.155883    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:37.155897    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:37.168721    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:37.168734    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:37.186621    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:37.186632    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:37.199223    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:37.199239    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:37.225695    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:37.225706    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:39.739980    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:39.946811    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:44.742231    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:44.742410    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:44.754546    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:44.754637    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:44.765093    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:44.765177    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:44.775976    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:44.776065    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:44.786524    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:44.786610    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:44.796965    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:44.797041    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:44.808466    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:44.808539    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:44.818583    4234 logs.go:276] 0 containers: []
	W0917 02:39:44.818597    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:44.818672    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:44.828512    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:44.828527    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:44.828533    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:44.852442    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:44.852450    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:44.861082    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:44.861089    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:44.877665    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:44.877678    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:44.888865    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:44.888876    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:44.902564    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:44.902578    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:44.914252    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:44.914266    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:44.934008    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:44.934020    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:44.945514    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:44.945524    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:44.964317    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:44.964391    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:45.003161    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:45.003185    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:45.044785    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:45.044798    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:45.060381    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:45.060398    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:44.948485    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:44.948587    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:44.960218    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:44.960309    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:44.971623    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:44.971710    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:44.982852    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:44.982939    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:44.993167    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:44.993255    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:45.004514    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:45.004595    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:45.016006    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:45.016095    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:45.027555    4370 logs.go:276] 0 containers: []
	W0917 02:39:45.027568    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:45.027650    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:45.038796    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:45.038813    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:45.038818    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:45.053640    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:45.053652    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:45.066289    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:45.066304    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:45.078706    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:45.078718    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:45.095300    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:45.095315    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:45.110071    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:45.110084    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:45.121649    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:45.121665    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:45.161496    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:45.161506    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:45.194965    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:45.194980    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:45.209952    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:45.209968    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:45.213991    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:45.214000    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:45.251232    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:45.251249    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:45.264699    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:45.264712    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:45.275996    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:45.276009    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:45.300367    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:45.300374    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:45.312265    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:45.312280    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:45.323954    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:45.323964    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:47.849278    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:47.573663    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:52.851512    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:52.851700    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:52.863694    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:52.863771    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:52.886580    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:52.886672    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:52.898824    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:52.898910    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:52.910108    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:52.910195    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:52.921667    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:52.921761    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:52.934692    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:52.934779    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:52.945214    4370 logs.go:276] 0 containers: []
	W0917 02:39:52.945227    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:52.945303    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:52.957854    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:52.957871    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:52.957876    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:52.980050    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:52.980061    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:53.000845    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:53.000854    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:53.012315    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:53.012328    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:53.024882    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:53.024892    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:53.062544    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:53.062558    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:53.078085    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:53.078094    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:53.089976    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:53.089986    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:53.105234    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:53.105243    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:53.127797    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:53.127806    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:53.141965    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:53.141974    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:53.175406    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:53.175416    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:53.189860    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:53.189876    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:53.213548    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:53.213561    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:53.252324    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:53.252332    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:53.266190    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:53.266200    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:53.277359    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:53.277368    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:52.575888    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:52.576028    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:52.588105    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:39:52.588203    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:52.599241    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:39:52.599332    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:52.614674    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:39:52.614760    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:52.625313    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:39:52.625401    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:52.635996    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:39:52.636079    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:52.646441    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:39:52.646528    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:52.658034    4234 logs.go:276] 0 containers: []
	W0917 02:39:52.658047    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:52.658122    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:52.668716    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:39:52.668731    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:39:52.668737    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:39:52.680869    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:39:52.680881    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:39:52.698614    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:39:52.698628    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:39:52.709881    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:39:52.709892    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:52.721291    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:52.721304    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:52.725789    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:39:52.725796    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:39:52.739628    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:39:52.739642    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:39:52.754801    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:39:52.754812    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:39:52.765850    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:39:52.765863    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:39:52.780076    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:39:52.780092    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:39:52.792065    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:52.792077    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:52.817263    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:52.817271    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:52.853600    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:52.853611    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
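
The cycle above repeats for the remainder of this run: each process polls the apiserver's /healthz endpoint with a hard client timeout, and when the deadline is exceeded ("Client.Timeout exceeded while awaiting headers") it falls back to enumerating the control-plane containers and dumping their logs. Below is a minimal Go sketch of the poll step, assuming the endpoint and timeout seen in the log and skipping TLS verification against the VM's self-signed certificate; it is an illustration of the pattern, not minikube's actual api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // This deadline is what produces "Client.Timeout exceeded
            // while awaiting headers" in the log above.
            Timeout: timeout,
            Transport: &http.Transport{
                // The test VM serves a self-signed cert; skip verification
                // in this sketch.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
        return nil
    }

    func main() {
        // 5s is an assumed timeout, chosen to match the ~5s poll spacing above.
        if err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }

Two pollers (pids 4234 and 4370, apparently one per cluster under test — note their different container ID sets) run concurrently, which is why their log lines interleave out of timestamp order.
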
	I0917 02:39:55.435607    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:55.783425    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:00.436940    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:00.437184    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:00.459816    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:00.459944    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:00.476177    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:00.476275    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:00.490318    4234 logs.go:276] 2 containers: [1f429c6c263e 840bcd2c52c8]
	I0917 02:40:00.490406    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:00.501365    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:00.501441    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:00.513230    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:00.513313    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:00.526248    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:00.526327    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:00.541069    4234 logs.go:276] 0 containers: []
	W0917 02:40:00.541080    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:00.541144    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:00.552071    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:00.552084    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:00.552089    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:00.566329    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:00.566342    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:00.578521    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:00.578535    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:00.592929    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:00.592938    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:00.620566    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:00.620583    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:00.658106    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:00.658124    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:00.734568    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:00.734583    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:00.761718    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:00.761731    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:00.784282    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:00.784291    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:00.809508    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:00.809519    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:00.823593    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:00.823603    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:00.836586    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:00.836601    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:00.841794    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:00.841806    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:00.784188    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:00.784314    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:00.796324    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:00.796417    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:00.808932    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:00.809024    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:00.821005    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:00.821102    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:00.832105    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:00.832191    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:00.843487    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:00.843574    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:00.855060    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:00.855151    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:00.865942    4370 logs.go:276] 0 containers: []
	W0917 02:40:00.865953    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:00.866024    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:00.880531    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:00.880549    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:00.880555    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:00.884983    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:00.884992    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:00.923098    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:00.923111    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:00.936871    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:00.936886    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:00.954983    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:00.954995    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:00.969926    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:00.969939    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:00.992197    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:00.992205    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:01.029802    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:01.029808    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:01.063414    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:01.063429    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:01.078066    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:01.078076    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:01.098703    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:01.098714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:01.110430    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:01.110440    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:01.128713    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:01.128723    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:01.140153    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:01.140170    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:01.154336    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:01.154346    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:01.166130    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:01.166141    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:01.179270    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:01.179282    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
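
Each failed poll is followed by the same enumeration pass: one docker ps -a per component, filtered on the k8s_<name> container-name prefix and formatted down to bare IDs, which logs.go then counts ("N containers: [...]"). A sketch of that step under the same assumptions follows; the listContainers helper name is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers runs the exact command shown in the log lines above
    // and returns the matching container IDs, one per output line.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Mirrors the "logs.go:276] N containers: [...]" lines.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
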
	I0917 02:40:03.692711    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:03.360571    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:08.695024    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:08.695124    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:08.706592    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:08.706675    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:08.717801    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:08.717893    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:08.734651    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:08.734734    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:08.752101    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:08.752190    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:08.762812    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:08.762900    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:08.773345    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:08.773434    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:08.784488    4370 logs.go:276] 0 containers: []
	W0917 02:40:08.784500    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:08.784572    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:08.794889    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:08.794907    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:08.794914    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:08.833523    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:08.833535    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:08.848982    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:08.848995    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:08.863586    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:08.863597    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:08.878209    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:08.878224    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:08.890801    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:08.890812    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:08.915701    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:08.915710    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:08.955299    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:08.955313    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:08.960095    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:08.960105    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:08.974995    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:08.975012    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:08.986513    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:08.986525    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:08.999570    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:08.999582    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:09.027319    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:09.027333    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:09.038569    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:09.038579    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:09.076539    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:09.076555    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:09.090015    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:09.090029    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:09.105078    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:09.105091    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:08.362907    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:08.363383    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:08.395685    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:08.395821    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:08.418049    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:08.418142    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:08.430714    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:08.430800    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:08.446145    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:08.446224    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:08.456952    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:08.457030    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:08.468233    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:08.468308    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:08.478657    4234 logs.go:276] 0 containers: []
	W0917 02:40:08.478671    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:08.478737    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:08.489444    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:08.489460    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:08.489466    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:08.500805    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:08.500815    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:08.515823    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:08.515832    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:08.541882    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:08.541890    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:08.556556    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:08.556567    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:08.594463    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:08.594474    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:08.609459    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:08.609472    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:08.621108    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:08.621119    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:08.655863    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:08.655871    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:08.660758    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:08.660766    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:08.675712    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:08.675722    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:08.689880    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:08.689894    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:08.702759    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:08.702772    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:08.716262    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:08.716274    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:08.735780    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:08.735789    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
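
With the IDs in hand, the runner gathers per-container logs by shelling docker logs --tail 400 <id> through /bin/bash -c. A local sketch of the same call, assuming direct docker access rather than the SSH hop ssh_runner.go performs; the container IDs are the ones from this excerpt and will differ on any other machine.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs dumps the last 400 lines from one container, matching the
    // "Gathering logs for <name> [<id>] ..." step in the log above.
    func gatherLogs(name, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        gatherLogs("kube-apiserver", "16d61eec746b")
        gatherLogs("etcd", "838757ec9133")
    }
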
	I0917 02:40:11.251002    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:11.634177    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:16.253445    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:16.253755    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:16.279498    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:16.279626    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:16.296191    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:16.296298    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:16.309521    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:16.309618    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:16.320661    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:16.320733    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:16.331353    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:16.331440    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:16.342631    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:16.342712    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:16.352464    4234 logs.go:276] 0 containers: []
	W0917 02:40:16.352477    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:16.352540    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:16.363102    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:16.363119    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:16.363124    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:16.375263    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:16.375274    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:16.390687    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:16.390698    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:16.402610    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:16.402620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:16.414559    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:16.414576    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:16.426314    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:16.426328    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:16.443512    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:16.443525    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:16.469430    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:16.469444    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:16.483602    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:16.483615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:16.502019    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:16.502028    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:16.538340    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:16.538359    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:16.543315    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:16.543322    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:16.579103    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:16.579116    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:16.593146    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:16.593156    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:16.605411    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:16.605422    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:16.636412    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:16.636549    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:16.647272    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:16.647361    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:16.657869    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:16.657958    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:16.669311    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:16.669395    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:16.682480    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:16.682561    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:16.700753    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:16.700837    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:16.711186    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:16.711264    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:16.725621    4370 logs.go:276] 0 containers: []
	W0917 02:40:16.725632    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:16.725707    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:16.736473    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:16.736493    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:16.736498    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:16.754393    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:16.754409    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:16.769166    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:16.769176    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:16.783638    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:16.783650    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:16.795207    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:16.795218    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:16.807314    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:16.807329    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:16.842942    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:16.842957    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:16.884953    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:16.884963    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:16.899345    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:16.899359    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:16.920396    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:16.920410    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:16.932780    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:16.932792    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:16.956325    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:16.956338    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:16.979060    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:16.979070    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:17.015637    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:17.015645    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:17.030140    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:17.030151    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:17.041724    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:17.041734    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:17.054766    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:17.054780    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
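
Host-level sources are collected the same way: journalctl for the docker/cri-docker and kubelet units, plus a level-filtered dmesg, all capped at 400 lines. The flags in the sketch below are copied verbatim from the log lines; running them requires sudo inside the VM.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runHost executes one host-log command through bash, as the runner does.
    func runHost(cmd string) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println(cmd, "failed:", err)
        }
        fmt.Print(string(out))
    }

    func main() {
        runHost("sudo journalctl -u docker -u cri-docker -n 400") // Docker + CRI shim units
        runHost("sudo journalctl -u kubelet -n 400")              // kubelet unit
        // -P: no pager, -H: human-readable, -L=never: no color,
        // --level: warnings and worse only.
        runHost("sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }
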
	I0917 02:40:19.119415    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:19.560831    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:24.121774    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:24.122045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:24.145572    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:24.145709    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:24.161709    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:24.161802    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:24.173993    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:24.174083    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:24.185004    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:24.185082    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:24.195973    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:24.196058    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:24.211170    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:24.211243    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:24.225622    4234 logs.go:276] 0 containers: []
	W0917 02:40:24.225635    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:24.225711    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:24.236510    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:24.236527    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:24.236534    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:24.248172    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:24.248187    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:24.282455    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:24.282465    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:24.294062    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:24.294075    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:24.305838    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:24.305848    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:24.310156    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:24.310164    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:24.324548    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:24.324558    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:24.338373    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:24.338384    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:24.350793    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:24.350803    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:24.368466    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:24.368477    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:24.392820    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:24.392828    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:24.407645    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:24.407656    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:24.419367    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:24.419377    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:24.455046    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:24.455057    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:24.469213    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:24.469227    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:24.563112    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:24.563239    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:24.573571    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:24.573662    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:24.584657    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:24.584746    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:24.594822    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:24.594904    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:24.606067    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:24.606146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:24.621236    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:24.621313    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:24.636967    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:24.637055    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:24.646881    4370 logs.go:276] 0 containers: []
	W0917 02:40:24.646894    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:24.646968    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:24.657538    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:24.657554    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:24.657559    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:24.671819    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:24.671829    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:24.689073    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:24.689085    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:24.703038    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:24.703049    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:24.714307    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:24.714319    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:24.738147    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:24.738155    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:24.776424    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:24.776432    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:24.790028    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:24.790039    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:24.801528    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:24.801539    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:24.817055    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:24.817066    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:24.828555    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:24.828566    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:24.866774    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:24.866784    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:24.880680    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:24.880692    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:24.892018    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:24.892030    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:24.896336    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:24.896343    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:24.934334    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:24.934346    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:24.946103    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:24.946114    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
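
The "container status" step relies on a small shell fallback: command substitution resolves crictl if it is on PATH (otherwise the bare word crictl, whose invocation then fails), and the || chain drops down to docker ps -a. The shell line below is copied verbatim from the log into a sketch; note the backticks survive unescaped inside a Go double-quoted string.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when available, otherwise fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
        }
        fmt.Print(string(out))
    }
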
	I0917 02:40:27.469171    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:26.983042    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:32.471478    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:32.471647    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:32.484056    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:32.484146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:32.494767    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:32.494842    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:32.505474    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:32.505559    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:32.516620    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:32.516702    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:32.526611    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:32.526690    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:32.537106    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:32.537197    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:32.547380    4370 logs.go:276] 0 containers: []
	W0917 02:40:32.547391    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:32.547465    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:32.557711    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:32.557744    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:32.557751    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:32.563249    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:32.563261    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:32.577181    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:32.577190    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:32.588208    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:32.588218    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:32.610440    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:32.610448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:32.647503    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:32.647511    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:32.661289    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:32.661298    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:32.673011    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:32.673022    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:32.690168    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:32.690180    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:32.702710    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:32.702729    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:32.738702    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:32.738714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:32.776900    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:32.776912    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:32.791182    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:32.791192    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:32.803434    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:32.803448    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:32.815450    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:32.815462    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:32.829863    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:32.829873    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:32.850893    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:32.850904    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:31.985337    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:31.985511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:31.996126    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:31.996217    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:32.006637    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:32.006726    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:32.017110    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:32.017196    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:32.028298    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:32.028386    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:32.039777    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:32.039858    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:32.050662    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:32.050752    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:32.064007    4234 logs.go:276] 0 containers: []
	W0917 02:40:32.064019    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:32.064084    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:32.083897    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:32.083915    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:32.083920    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:32.109248    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:32.109256    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:32.145535    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:32.145546    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:32.159233    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:32.159244    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:32.170996    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:32.171005    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:32.182507    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:32.182516    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:32.196471    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:32.196481    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:32.207718    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:32.207727    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:32.222250    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:32.222260    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:32.233830    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:32.233839    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:32.238325    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:32.238333    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:32.268461    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:32.268480    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:32.286701    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:32.286712    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:32.301857    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:32.301866    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:32.337027    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:32.337043    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
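
The "describe nodes" step shells out to the kubectl binary minikube provisioned inside the VM, pinned to the cluster's Kubernetes version (v1.24.1 here) and pointed at the VM-local kubeconfig. A sketch with the paths copied from the log lines:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes"+
                " --kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
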
	I0917 02:40:34.851581    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:35.367886    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:39.853905    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:39.854153    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:39.877789    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:39.877902    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:39.892270    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:39.892366    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:39.905246    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:39.905331    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:39.922054    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:39.922136    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:39.935426    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:39.935511    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:39.946353    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:39.946444    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:39.961233    4234 logs.go:276] 0 containers: []
	W0917 02:40:39.961244    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:39.961317    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:39.972079    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:39.972098    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:39.972107    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:40.012880    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:40.012891    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:40.037017    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:40.037026    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:40.048316    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:40.048328    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:40.082540    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:40.082550    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:40.101398    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:40.101408    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:40.113241    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:40.113252    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:40.125258    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:40.125268    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:40.129848    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:40.129857    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:40.142251    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:40.142261    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:40.162970    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:40.162983    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:40.180084    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:40.180094    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:40.201854    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:40.201868    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:40.213067    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:40.213077    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:40.225123    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:40.225137    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
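	The block above is one full pass of minikube's log-gathering loop: for each control-plane component it lists matching containers with a docker ps name filter, then tails each container's log. A minimal sketch of the same sweep run by hand inside the guest (the component list and 400-line tail mirror the log; a Docker runtime with kubeadm-style "k8s_" container name prefixes is assumed):
	
	    # One pass of the gathering loop, done manually.
	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                     kube-controller-manager storage-provisioner; do
	      for id in $(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}'); do
	        echo "==> ${component} [${id}]"
	        docker logs --tail 400 "$id"
	      done
	    done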
	I0917 02:40:40.370161    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:40.370352    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:40.383634    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:40.383723    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:40.394419    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:40.394511    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:40.405085    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:40.405166    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:40.420454    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:40.420540    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:40.433922    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:40.434001    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:40.444670    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:40.444750    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:40.455158    4370 logs.go:276] 0 containers: []
	W0917 02:40:40.455171    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:40.455245    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:40.469942    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:40.469959    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:40.469964    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:40.510652    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:40.510664    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:40.525375    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:40.525384    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:40.545933    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:40.545944    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:40.558212    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:40.558225    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:40.562920    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:40.562928    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:40.599211    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:40.599223    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:40.615375    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:40.615387    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:40.636780    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:40.636794    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:40.648231    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:40.648246    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:40.661851    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:40.661862    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:40.675838    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:40.675851    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:40.688406    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:40.688418    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:40.727621    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:40.727639    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:40.739700    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:40.739711    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:40.755057    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:40.755070    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:40.767078    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:40.767089    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:43.293288    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:42.738980    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:48.295576    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
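	Each "Checking apiserver healthz" / "stopped" pair above is one poll of the apiserver health endpoint ending in a client timeout. A quick reachability probe of the same endpoint by hand (a sketch only: -k skips TLS verification and the 5-second cap stands in for the client timeout seen in the log, both assumptions on my part):
	
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
	    echo "exit: $?"   # non-zero on timeout, matching the Client.Timeout errors above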
	I0917 02:40:48.295763    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:48.310721    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:48.310818    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:48.322940    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:48.323028    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:48.334305    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:48.334398    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:48.344980    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:48.345063    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:48.355706    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:48.355795    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:48.366719    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:48.366802    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:48.376807    4370 logs.go:276] 0 containers: []
	W0917 02:40:48.376819    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:48.376884    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:48.387686    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:48.387703    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:48.387708    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:48.399888    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:48.399898    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:48.421642    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:48.421648    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:48.433398    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:48.433409    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:48.452080    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:48.452090    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:48.463927    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:48.463936    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:48.481163    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:48.481171    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:48.496014    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:48.496025    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:48.500497    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:48.500505    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:48.539260    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:48.539270    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:48.554198    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:48.554208    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:48.568997    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:48.569007    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:48.582771    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:48.582780    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:48.617925    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:48.617935    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:48.632081    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:48.632091    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:48.654540    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:48.654550    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:48.669604    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:48.669619    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:47.741419    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:47.741656    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:47.764112    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:47.764220    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:47.778759    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:47.778864    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:47.791626    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:47.791709    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:47.802745    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:47.802831    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:47.814766    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:47.814851    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:47.838956    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:47.839045    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:47.850568    4234 logs.go:276] 0 containers: []
	W0917 02:40:47.850579    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:47.850643    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:47.863345    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:47.863363    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:47.863368    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:47.875315    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:47.875324    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:47.887080    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:47.887091    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:47.923086    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:47.923099    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:47.937428    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:47.937439    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:47.949365    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:47.949378    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:47.963648    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:47.963662    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:47.968651    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:47.968658    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:47.980799    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:47.980811    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:48.005027    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:48.005043    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:48.017790    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:48.017807    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:48.054897    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:48.054906    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:48.073934    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:48.073945    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:48.085406    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:48.085421    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:48.109617    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:48.109626    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:50.631785    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:51.209543    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:55.634119    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:55.634417    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:55.656814    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:40:55.656954    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:55.672413    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:40:55.672497    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:55.684937    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:40:55.685026    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:55.696213    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:40:55.696304    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:55.714710    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:40:55.714788    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:55.726176    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:40:55.726250    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:55.736961    4234 logs.go:276] 0 containers: []
	W0917 02:40:55.736972    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:55.737041    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:55.747809    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:40:55.747833    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:55.747839    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:55.752800    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:55.752809    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:55.789320    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:40:55.789332    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:40:55.801188    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:40:55.801200    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:40:55.816859    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:40:55.816869    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:40:55.829201    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:40:55.829214    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:40:55.845165    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:40:55.845176    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:40:55.863310    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:40:55.863325    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:55.874978    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:55.874989    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:55.910920    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:40:55.910928    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:40:55.924831    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:40:55.924840    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:40:55.937535    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:55.937547    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:55.962710    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:40:55.962718    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:40:55.974350    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:40:55.974359    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:40:55.986246    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:40:55.986256    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:40:56.211871    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:56.212035    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:56.224879    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:56.224975    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:56.238424    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:56.238508    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:56.249019    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:56.249102    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:56.258754    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:56.258828    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:56.269443    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:56.269532    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:56.279419    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:56.279499    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:56.289553    4370 logs.go:276] 0 containers: []
	W0917 02:40:56.289565    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:56.289633    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:56.300094    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:56.300111    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:56.300116    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:56.339578    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:56.339588    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:56.373525    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:56.373536    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:56.385525    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:56.385537    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:56.404723    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:56.404735    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:56.408793    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:56.408800    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:56.425844    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:56.425854    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:56.438223    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:56.438233    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:56.452237    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:56.452246    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:56.466285    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:56.466300    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:56.481437    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:56.481450    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:56.502099    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:56.502109    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:56.517068    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:56.517079    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:56.539762    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:56.539768    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:56.577458    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:56.577471    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:56.592034    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:56.592045    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:56.603858    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:56.603870    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:59.118189    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:58.512415    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:04.120348    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:04.120463    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:04.131707    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:41:04.131792    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:04.142545    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:41:04.142632    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:04.159548    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:41:04.159637    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:04.170092    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:41:04.170186    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:04.180744    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:41:04.180822    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:04.191218    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:41:04.191306    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:04.201664    4370 logs.go:276] 0 containers: []
	W0917 02:41:04.201682    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:04.201764    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:04.212120    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:41:04.212137    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:41:04.212143    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:41:04.224158    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:41:04.224170    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:41:04.245139    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:41:04.245151    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:41:04.256339    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:41:04.256349    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:41:04.273131    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:41:04.273141    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:03.514723    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:03.514935    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:03.532155    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:03.532241    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:03.543730    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:03.543825    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:03.554455    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:03.554530    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:03.564849    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:03.564936    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:03.575238    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:03.575324    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:03.585564    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:03.585651    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:03.595144    4234 logs.go:276] 0 containers: []
	W0917 02:41:03.595155    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:03.595220    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:03.605662    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:03.605680    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:03.605685    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:03.641522    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:03.641531    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:03.665985    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:03.665996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:03.677579    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:03.677591    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:03.689674    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:03.689689    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:03.726063    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:03.726077    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:03.741001    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:03.741014    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:03.753015    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:03.753026    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:03.766507    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:03.766517    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:03.791026    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:03.791041    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:03.808883    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:03.808897    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:03.821059    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:03.821075    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:03.825533    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:03.825539    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:03.839426    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:03.839439    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:03.856008    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:03.856021    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:06.370305    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:04.286293    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:41:04.286304    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:41:04.301858    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:41:04.301868    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:41:04.345531    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:41:04.345544    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:41:04.359481    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:41:04.359491    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:41:04.371140    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:04.371151    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:04.408773    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:41:04.408780    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:41:04.422915    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:41:04.422931    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:41:04.434470    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:04.434481    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:04.458416    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:04.458427    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:04.462616    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:04.462625    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:04.497479    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:41:04.497493    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:41:04.509194    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:41:04.509205    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:41:07.026014    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:11.372914    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:11.373181    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:11.395443    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:11.395578    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:11.414366    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:11.414460    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:11.426550    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:11.426645    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:11.436892    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:11.436980    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:11.447620    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:11.447710    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:11.458699    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:11.458767    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:11.469171    4234 logs.go:276] 0 containers: []
	W0917 02:41:11.469186    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:11.469254    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:11.480192    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:11.480209    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:11.480215    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:11.492678    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:11.492688    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:11.528995    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:11.529013    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:11.541540    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:11.541555    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:11.556907    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:11.556916    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:11.568767    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:11.568778    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:11.586016    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:11.586026    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:11.590685    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:11.590691    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:11.602199    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:11.602209    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:11.613869    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:11.613879    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:11.625438    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:11.625452    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:11.643810    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:11.643820    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:11.657772    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:11.657782    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:12.028048    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:12.028118    4370 kubeadm.go:597] duration metric: took 4m4.002254959s to restartPrimaryControlPlane
	W0917 02:41:12.028166    4370 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 02:41:12.028183    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0917 02:41:13.028261    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:41:13.033574    4370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:41:13.036467    4370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:41:13.039044    4370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:41:13.039051    4370 kubeadm.go:157] found existing configuration files:
	
	I0917 02:41:13.039080    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0917 02:41:13.041359    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:41:13.041381    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:41:13.044079    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0917 02:41:13.046537    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:41:13.046564    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:41:13.049330    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0917 02:41:13.052507    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:41:13.052534    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:41:13.055239    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0917 02:41:13.057555    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:41:13.057579    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
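	The four grep/rm pairs above are a stale-config sweep: a kubeconfig under /etc/kubernetes is kept only if it already names the expected control-plane endpoint, and is removed otherwise so kubeadm can rewrite it. The same sweep as a loop (endpoint and file list copied from the log; error handling simplified):
	
	    endpoint="https://control-plane.minikube.internal:50506"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # grep exits non-zero both when the file is missing and when the
	      # endpoint is absent; either way the file is treated as stale.
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done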
	I0917 02:41:13.060728    4370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 02:41:13.079571    4370 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 02:41:13.079606    4370 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 02:41:13.128134    4370 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 02:41:13.128186    4370 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 02:41:13.128228    4370 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 02:41:13.178739    4370 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:41:13.182964    4370 out.go:235]   - Generating certificates and keys ...
	I0917 02:41:13.182998    4370 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 02:41:13.183031    4370 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 02:41:13.183067    4370 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:41:13.183098    4370 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 02:41:13.183142    4370 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:41:13.183170    4370 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 02:41:13.183201    4370 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 02:41:13.183231    4370 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:41:13.183270    4370 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:41:13.183335    4370 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:41:13.183357    4370 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 02:41:13.183404    4370 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:41:13.265303    4370 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:41:13.482428    4370 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:41:13.602433    4370 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:41:13.667495    4370 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:41:13.696680    4370 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:41:13.696734    4370 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:41:13.696756    4370 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 02:41:13.771308    4370 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:41:13.775536    4370 out.go:235]   - Booting up control plane ...
	I0917 02:41:13.775602    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:41:13.775642    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:41:13.775685    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:41:13.775744    4370 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:41:13.775880    4370 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
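	The wait above is for the kubelet to materialize the four static Pod manifests just written. A rough way to watch the same progress from inside the guest (manifest path from the kubeadm output above; the docker filter reuses the naming convention seen throughout this log):
	
	    ls /etc/kubernetes/manifests
	    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
	    docker ps --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'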
	I0917 02:41:11.681272    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:11.681280    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:11.717971    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:11.717981    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:14.231940    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:18.274698    4370 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501391 seconds
	I0917 02:41:18.274768    4370 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 02:41:18.278760    4370 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 02:41:18.803486    4370 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 02:41:18.803895    4370 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-288000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 02:41:19.307183    4370 kubeadm.go:310] [bootstrap-token] Using token: 4vsdjq.4qj5uidod7poi6do
	I0917 02:41:19.310970    4370 out.go:235]   - Configuring RBAC rules ...
	I0917 02:41:19.311037    4370 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 02:41:19.311084    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 02:41:19.315594    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 02:41:19.317049    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 02:41:19.318035    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 02:41:19.319115    4370 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 02:41:19.322539    4370 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 02:41:19.477319    4370 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 02:41:19.712895    4370 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 02:41:19.713060    4370 kubeadm.go:310] 
	I0917 02:41:19.713094    4370 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 02:41:19.713096    4370 kubeadm.go:310] 
	I0917 02:41:19.713143    4370 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 02:41:19.713147    4370 kubeadm.go:310] 
	I0917 02:41:19.713162    4370 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 02:41:19.713221    4370 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 02:41:19.713253    4370 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 02:41:19.713258    4370 kubeadm.go:310] 
	I0917 02:41:19.713286    4370 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 02:41:19.713291    4370 kubeadm.go:310] 
	I0917 02:41:19.713314    4370 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 02:41:19.713317    4370 kubeadm.go:310] 
	I0917 02:41:19.713343    4370 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 02:41:19.713380    4370 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 02:41:19.713422    4370 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 02:41:19.713431    4370 kubeadm.go:310] 
	I0917 02:41:19.713476    4370 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 02:41:19.713517    4370 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 02:41:19.713521    4370 kubeadm.go:310] 
	I0917 02:41:19.713560    4370 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4vsdjq.4qj5uidod7poi6do \
	I0917 02:41:19.713613    4370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d \
	I0917 02:41:19.713627    4370 kubeadm.go:310] 	--control-plane 
	I0917 02:41:19.713631    4370 kubeadm.go:310] 
	I0917 02:41:19.713683    4370 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 02:41:19.713686    4370 kubeadm.go:310] 
	I0917 02:41:19.713728    4370 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4vsdjq.4qj5uidod7poi6do \
	I0917 02:41:19.713779    4370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d 
	I0917 02:41:19.714024    4370 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 02:41:19.714034    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:41:19.714065    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:41:19.721100    4370 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 02:41:19.725125    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 02:41:19.728291    4370 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
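	The 496-byte conflist itself is not reproduced in the log. For orientation only, a bridge CNI config of the general shape such setups use looks roughly like the following; every field value below is illustrative, not the file minikube actually wrote:
	
	    # Illustrative bridge CNI conflist (all values assumed, including the subnet).
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF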
	I0917 02:41:19.733012    4370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:41:19.733072    4370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 02:41:19.733086    4370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-288000 minikube.k8s.io/updated_at=2024_09_17T02_41_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=stopped-upgrade-288000 minikube.k8s.io/primary=true
	I0917 02:41:19.775257    4370 ops.go:34] apiserver oom_adj: -16
	I0917 02:41:19.775319    4370 kubeadm.go:1113] duration metric: took 42.289209ms to wait for elevateKubeSystemPrivileges
	I0917 02:41:19.775331    4370 kubeadm.go:394] duration metric: took 4m11.762931708s to StartCluster
	I0917 02:41:19.775343    4370 settings.go:142] acquiring lock: {Name:mk2d861f3b7e502753ec34b4d96136a66d57e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:41:19.775439    4370 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:41:19.775908    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:41:19.776118    4370 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:41:19.776204    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:41:19.776182    4370 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:41:19.776246    4370 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-288000"
	I0917 02:41:19.776255    4370 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-288000"
	W0917 02:41:19.776261    4370 addons.go:243] addon storage-provisioner should already be in state true
	I0917 02:41:19.776264    4370 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-288000"
	I0917 02:41:19.776270    4370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-288000"
	I0917 02:41:19.776272    4370 host.go:66] Checking if "stopped-upgrade-288000" exists ...
	I0917 02:41:19.777235    4370 kapi.go:59] client config for stopped-upgrade-288000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106395800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:41:19.777358    4370 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-288000"
	W0917 02:41:19.777363    4370 addons.go:243] addon default-storageclass should already be in state true
	I0917 02:41:19.777376    4370 host.go:66] Checking if "stopped-upgrade-288000" exists ...
	I0917 02:41:19.780103    4370 out.go:177] * Verifying Kubernetes components...
	I0917 02:41:19.780440    4370 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 02:41:19.784323    4370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 02:41:19.784330    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:41:19.788001    4370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:41:19.234263    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:19.234422    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:19.246390    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:19.246474    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:19.256841    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:19.256933    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:19.269450    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:19.269545    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:19.280458    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:19.280535    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:19.290763    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:19.290846    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:19.307659    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:19.307746    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:19.323374    4234 logs.go:276] 0 containers: []
	W0917 02:41:19.323386    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:19.323463    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:19.335042    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:19.335060    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:19.335066    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:19.354854    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:19.354874    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:19.367405    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:19.367418    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:19.404784    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:19.404797    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:19.441717    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:19.441730    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:19.456601    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:19.456620    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:19.470394    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:19.470409    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:19.486748    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:19.486765    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:19.491610    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:19.491621    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:19.506819    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:19.506838    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:19.533398    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:19.533420    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:19.546218    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:19.546230    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:19.559944    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:19.559958    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:19.573494    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:19.573509    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:19.586598    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:19.586615    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:19.792134    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:41:19.793202    4370 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:41:19.793207    4370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 02:41:19.793211    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:41:19.867682    4370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:41:19.873159    4370 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:41:19.873215    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:41:19.877163    4370 api_server.go:72] duration metric: took 101.034708ms to wait for apiserver process to appear ...
	I0917 02:41:19.877170    4370 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:41:19.877177    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:19.903449    4370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 02:41:19.919049    4370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:41:20.235871    4370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:41:20.235882    4370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:41:22.101800    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:24.879243    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:24.879302    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:27.104161    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:27.104343    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:27.116694    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:27.116786    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:27.128061    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:27.128151    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:27.138730    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:27.138820    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:27.149848    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:27.149929    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:27.160478    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:27.160561    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:27.171614    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:27.171693    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:27.182714    4234 logs.go:276] 0 containers: []
	W0917 02:41:27.182725    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:27.182800    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:27.193376    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:27.193396    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:27.193401    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:27.209921    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:27.209932    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:27.224594    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:27.224604    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:27.235810    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:27.235821    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:27.248702    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:27.248718    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:27.266423    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:27.266432    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:27.270865    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:27.270874    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:27.285251    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:27.285260    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:27.299612    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:27.299625    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:27.313029    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:27.313037    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:27.338192    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:27.338200    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:27.373498    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:27.373507    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:27.409881    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:27.409892    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:27.421815    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:27.421825    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:27.436979    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:27.436992    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:29.951096    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:29.879740    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:29.879777    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:34.953306    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:34.953452    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:34.964528    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:34.964600    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:34.975094    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:34.975181    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:34.986566    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:34.986650    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:34.997153    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:34.997232    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:35.008109    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:35.008200    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:35.019176    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:35.019246    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:35.029388    4234 logs.go:276] 0 containers: []
	W0917 02:41:35.029400    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:35.029472    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:35.044365    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:35.044383    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:35.044389    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:35.049523    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:35.049531    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:35.061222    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:35.061231    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:35.073198    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:35.073207    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:35.088711    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:35.088729    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:35.101414    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:35.101425    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:35.120064    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:35.120073    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:35.132242    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:35.132256    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:35.143938    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:35.143950    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:35.155784    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:35.155794    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:35.191556    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:35.191572    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:35.228757    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:35.228769    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:35.245177    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:35.245189    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:35.263593    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:35.263604    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:35.274872    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:35.274884    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:34.880147    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:34.880171    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:37.800616    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:39.880612    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:39.880636    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:42.802085    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:42.802286    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:42.823495    4234 logs.go:276] 1 containers: [16d61eec746b]
	I0917 02:41:42.823610    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:42.838867    4234 logs.go:276] 1 containers: [838757ec9133]
	I0917 02:41:42.838958    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:42.850913    4234 logs.go:276] 4 containers: [49edb3891c37 8b0b66ddf046 1f429c6c263e 840bcd2c52c8]
	I0917 02:41:42.851018    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:42.861619    4234 logs.go:276] 1 containers: [fbff6d9caced]
	I0917 02:41:42.861690    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:42.872323    4234 logs.go:276] 1 containers: [58b759fff751]
	I0917 02:41:42.872400    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:42.883763    4234 logs.go:276] 1 containers: [c6867b4e117b]
	I0917 02:41:42.883843    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:42.894196    4234 logs.go:276] 0 containers: []
	W0917 02:41:42.894206    4234 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:42.894266    4234 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:42.904456    4234 logs.go:276] 1 containers: [134b5885cc44]
	I0917 02:41:42.904471    4234 logs.go:123] Gathering logs for coredns [49edb3891c37] ...
	I0917 02:41:42.904478    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49edb3891c37"
	I0917 02:41:42.916812    4234 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:42.916821    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:42.941939    4234 logs.go:123] Gathering logs for container status ...
	I0917 02:41:42.941953    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:42.954633    4234 logs.go:123] Gathering logs for coredns [840bcd2c52c8] ...
	I0917 02:41:42.954648    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 840bcd2c52c8"
	I0917 02:41:42.966462    4234 logs.go:123] Gathering logs for kube-proxy [58b759fff751] ...
	I0917 02:41:42.966471    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58b759fff751"
	I0917 02:41:42.978985    4234 logs.go:123] Gathering logs for storage-provisioner [134b5885cc44] ...
	I0917 02:41:42.978996    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 134b5885cc44"
	I0917 02:41:42.999287    4234 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:42.999298    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:43.004373    4234 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:43.004380    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:43.040867    4234 logs.go:123] Gathering logs for kube-controller-manager [c6867b4e117b] ...
	I0917 02:41:43.040877    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c6867b4e117b"
	I0917 02:41:43.059267    4234 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:43.059277    4234 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:43.094412    4234 logs.go:123] Gathering logs for kube-apiserver [16d61eec746b] ...
	I0917 02:41:43.094422    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16d61eec746b"
	I0917 02:41:43.109765    4234 logs.go:123] Gathering logs for etcd [838757ec9133] ...
	I0917 02:41:43.109775    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838757ec9133"
	I0917 02:41:43.123735    4234 logs.go:123] Gathering logs for coredns [8b0b66ddf046] ...
	I0917 02:41:43.123745    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b0b66ddf046"
	I0917 02:41:43.136533    4234 logs.go:123] Gathering logs for coredns [1f429c6c263e] ...
	I0917 02:41:43.136544    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f429c6c263e"
	I0917 02:41:43.148990    4234 logs.go:123] Gathering logs for kube-scheduler [fbff6d9caced] ...
	I0917 02:41:43.149002    4234 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbff6d9caced"
	I0917 02:41:45.672122    4234 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:44.881215    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:44.881236    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:49.881988    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:49.882019    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 02:41:50.238041    4370 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 02:41:50.243406    4370 out.go:177] * Enabled addons: storage-provisioner
	I0917 02:41:50.672317    4234 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:50.676038    4234 out.go:201] 
	W0917 02:41:50.678782    4234 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 02:41:50.678791    4234 out.go:270] * 
	W0917 02:41:50.679524    4234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:41:50.690835    4234 out.go:201] 
	I0917 02:41:50.251257    4370 addons.go:510] duration metric: took 30.475315334s for enable addons: enabled=[storage-provisioner]
	I0917 02:41:54.882966    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:54.883008    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:59.884335    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:59.884358    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-09-17 09:32:46 UTC, ends at Tue 2024-09-17 09:42:06 UTC. --
	Sep 17 09:41:50 running-upgrade-202000 dockerd[3349]: time="2024-09-17T09:41:50.914607471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 17 09:41:50 running-upgrade-202000 dockerd[3349]: time="2024-09-17T09:41:50.914752177Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ce21705e0ec45f7575f6f42beda7ed3b77d12f07c7e12cfc53621163f522fe56 pid=18912 runtime=io.containerd.runc.v2
	Sep 17 09:41:51 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:51Z" level=error msg="ContainerStats resp: {0x4000358600 linux}"
	Sep 17 09:41:51 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:51Z" level=error msg="ContainerStats resp: {0x40006ee680 linux}"
	Sep 17 09:41:51 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 09:41:52 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:52Z" level=error msg="ContainerStats resp: {0x4000a34800 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x4000a35080 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x4000997d00 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x40008203c0 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x4000a35f80 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x40009c2400 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x4000821400 linux}"
	Sep 17 09:41:53 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:53Z" level=error msg="ContainerStats resp: {0x40009c2cc0 linux}"
	Sep 17 09:41:56 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:41:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 09:42:01 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 17 09:42:03 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:03Z" level=error msg="ContainerStats resp: {0x400097e6c0 linux}"
	Sep 17 09:42:03 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:03Z" level=error msg="ContainerStats resp: {0x400097edc0 linux}"
	Sep 17 09:42:04 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:04Z" level=error msg="ContainerStats resp: {0x4000416b80 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40009c2c00 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40004a9080 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40009c3300 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40004a9680 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40009c2400 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40009c2840 linux}"
	Sep 17 09:42:05 running-upgrade-202000 cri-dockerd[3188]: time="2024-09-17T09:42:05Z" level=error msg="ContainerStats resp: {0x40009c2c80 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ce21705e0ec45       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   e9df7a10eb7f4
	3e3f025925126       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   f6974ccf69224
	49edb3891c371       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e9df7a10eb7f4
	8b0b66ddf0466       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f6974ccf69224
	134b5885cc44f       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   40c6a1d19d85e
	58b759fff7512       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   6d4d6f2f52f73
	c6867b4e117bb       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8ceab3f63d8c4
	838757ec9133f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   d112cf29954b1
	16d61eec746bf       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   5446aa5ad07f0
	fbff6d9caced7       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6f232d7db347f
	
	
	==> coredns [3e3f02592512] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6652837249960385342.2203879801571095561. HINFO: read udp 10.244.0.2:55925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6652837249960385342.2203879801571095561. HINFO: read udp 10.244.0.2:59397->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6652837249960385342.2203879801571095561. HINFO: read udp 10.244.0.2:52117->10.0.2.3:53: i/o timeout
	
	
	==> coredns [49edb3891c37] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:41872->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:46764->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:52294->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:56333->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:48124->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:38382->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:40360->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:36707->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:33658->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6544975517496467015.6093256693660113792. HINFO: read udp 10.244.0.3:52425->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8b0b66ddf046] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:44576->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:46217->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:50365->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:58010->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:34286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:60469->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:33534->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:47186->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:50329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8899133420346981020.5765540865692375733. HINFO: read udp 10.244.0.2:35939->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ce21705e0ec4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8276700449994436670.4200101197187376080. HINFO: read udp 10.244.0.3:33433->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8276700449994436670.4200101197187376080. HINFO: read udp 10.244.0.3:44013->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8276700449994436670.4200101197187376080. HINFO: read udp 10.244.0.3:33780->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-202000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-202000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61
	                    minikube.k8s.io/name=running-upgrade-202000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T02_37_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 09:37:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-202000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 09:42:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 09:37:49 +0000   Tue, 17 Sep 2024 09:37:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 09:37:49 +0000   Tue, 17 Sep 2024 09:37:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 09:37:49 +0000   Tue, 17 Sep 2024 09:37:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 09:37:49 +0000   Tue, 17 Sep 2024 09:37:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-202000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4a4ba4f2dfa46e5b50ea90cdb0fd43d
	  System UUID:                e4a4ba4f2dfa46e5b50ea90cdb0fd43d
	  Boot ID:                    aa2a0de6-7bae-4221-8e34-9a7103ee1ea1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dv5qn                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-dx2zj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-202000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-202000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-202000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-gs9k4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-202000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-202000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-202000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-202000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-202000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-202000 event: Registered Node running-upgrade-202000 in Controller
	
	
	==> dmesg <==
	[  +1.674611] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.084224] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.071371] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +0.171067] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.077816] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[Sep17 09:33] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +0.206159] kauditd_printk_skb: 92 callbacks suppressed
	[  +7.943154] systemd-fstab-generator[1836]: Ignoring "noauto" for root device
	[  +2.857389] systemd-fstab-generator[2198]: Ignoring "noauto" for root device
	[  +0.139240] systemd-fstab-generator[2232]: Ignoring "noauto" for root device
	[  +0.096272] systemd-fstab-generator[2243]: Ignoring "noauto" for root device
	[  +0.092070] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[ +17.377883] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.208832] systemd-fstab-generator[3143]: Ignoring "noauto" for root device
	[  +0.078845] systemd-fstab-generator[3156]: Ignoring "noauto" for root device
	[  +0.077775] systemd-fstab-generator[3167]: Ignoring "noauto" for root device
	[  +0.088754] systemd-fstab-generator[3181]: Ignoring "noauto" for root device
	[  +2.288832] systemd-fstab-generator[3335]: Ignoring "noauto" for root device
	[  +4.105774] systemd-fstab-generator[3734]: Ignoring "noauto" for root device
	[  +1.259807] systemd-fstab-generator[4029]: Ignoring "noauto" for root device
	[ +17.612537] kauditd_printk_skb: 68 callbacks suppressed
	[Sep17 09:37] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.146691] systemd-fstab-generator[12064]: Ignoring "noauto" for root device
	[  +5.642642] systemd-fstab-generator[12650]: Ignoring "noauto" for root device
	[  +0.454808] systemd-fstab-generator[12780]: Ignoring "noauto" for root device
	
	
	==> etcd [838757ec9133] <==
	{"level":"info","ts":"2024-09-17T09:37:45.067Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T09:37:45.071Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T09:37:45.071Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T09:37:45.067Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-17T09:37:45.071Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-17T09:37:45.071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-17T09:37:45.071Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-17T09:37:45.628Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-202000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T09:37:45.629Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T09:37:45.630Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T09:37:45.630Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T09:37:45.630Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-17T09:37:45.631Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:42:06 up 9 min,  0 users,  load average: 0.31, 0.36, 0.20
	Linux running-upgrade-202000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [16d61eec746b] <==
	I0917 09:37:46.819732       1 controller.go:611] quota admission added evaluator for: namespaces
	I0917 09:37:46.850789       1 cache.go:39] Caches are synced for autoregister controller
	I0917 09:37:46.850981       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0917 09:37:46.850791       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0917 09:37:46.851087       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 09:37:46.851104       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0917 09:37:46.867353       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0917 09:37:47.583624       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0917 09:37:47.753377       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0917 09:37:47.754631       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0917 09:37:47.754637       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 09:37:47.889867       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 09:37:47.903562       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 09:37:47.924293       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0917 09:37:47.926054       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0917 09:37:47.926376       1 controller.go:611] quota admission added evaluator for: endpoints
	I0917 09:37:47.927638       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 09:37:48.888568       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0917 09:37:49.435688       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0917 09:37:49.439334       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0917 09:37:49.466578       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0917 09:37:49.510303       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 09:38:02.446363       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0917 09:38:02.594720       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0917 09:38:03.204433       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c6867b4e117b] <==
	W0917 09:38:01.895096       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="running-upgrade-202000" does not exist
	I0917 09:38:01.896447       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0917 09:38:01.903984       1 shared_informer.go:262] Caches are synced for attach detach
	I0917 09:38:01.905247       1 shared_informer.go:262] Caches are synced for node
	I0917 09:38:01.905306       1 range_allocator.go:173] Starting range CIDR allocator
	I0917 09:38:01.905341       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0917 09:38:01.905432       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0917 09:38:01.912940       1 range_allocator.go:374] Set node running-upgrade-202000 PodCIDR to [10.244.0.0/24]
	I0917 09:38:01.927752       1 shared_informer.go:262] Caches are synced for daemon sets
	I0917 09:38:01.942314       1 shared_informer.go:262] Caches are synced for TTL
	I0917 09:38:01.989652       1 shared_informer.go:262] Caches are synced for taint
	I0917 09:38:01.989730       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0917 09:38:01.989764       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-202000. Assuming now as a timestamp.
	I0917 09:38:01.989798       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0917 09:38:01.989801       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0917 09:38:01.989941       1 event.go:294] "Event occurred" object="running-upgrade-202000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-202000 event: Registered Node running-upgrade-202000 in Controller"
	I0917 09:38:01.993104       1 shared_informer.go:262] Caches are synced for GC
	I0917 09:38:01.993617       1 shared_informer.go:262] Caches are synced for persistent volume
	I0917 09:38:02.356695       1 shared_informer.go:262] Caches are synced for garbage collector
	I0917 09:38:02.442117       1 shared_informer.go:262] Caches are synced for garbage collector
	I0917 09:38:02.442126       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0917 09:38:02.447942       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0917 09:38:02.597702       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gs9k4"
	I0917 09:38:02.745626       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dv5qn"
	I0917 09:38:02.754233       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dx2zj"
	
	
	==> kube-proxy [58b759fff751] <==
	I0917 09:38:03.170081       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0917 09:38:03.170360       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0917 09:38:03.170417       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0917 09:38:03.201830       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0917 09:38:03.201839       1 server_others.go:206] "Using iptables Proxier"
	I0917 09:38:03.201863       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0917 09:38:03.201971       1 server.go:661] "Version info" version="v1.24.1"
	I0917 09:38:03.201975       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 09:38:03.202413       1 config.go:444] "Starting node config controller"
	I0917 09:38:03.202420       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0917 09:38:03.202428       1 config.go:317] "Starting service config controller"
	I0917 09:38:03.202429       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0917 09:38:03.202435       1 config.go:226] "Starting endpoint slice config controller"
	I0917 09:38:03.202436       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0917 09:38:03.302723       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0917 09:38:03.302750       1 shared_informer.go:262] Caches are synced for node config
	I0917 09:38:03.302753       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [fbff6d9caced] <==
	W0917 09:37:46.824997       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 09:37:46.825191       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 09:37:46.825236       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 09:37:46.825255       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0917 09:37:46.825320       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 09:37:46.825371       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0917 09:37:46.825417       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 09:37:46.825436       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0917 09:37:46.825473       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 09:37:46.825626       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0917 09:37:46.825665       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 09:37:46.825681       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0917 09:37:46.825725       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 09:37:46.825746       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0917 09:37:46.825773       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 09:37:46.825791       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0917 09:37:46.825828       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 09:37:46.825856       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0917 09:37:47.737207       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 09:37:47.737345       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0917 09:37:47.748030       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 09:37:47.748086       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0917 09:37:47.801722       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 09:37:47.801766       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0917 09:37:48.122027       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-09-17 09:32:46 UTC, ends at Tue 2024-09-17 09:42:07 UTC. --
	Sep 17 09:37:51 running-upgrade-202000 kubelet[12656]: E0917 09:37:51.286034   12656 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-202000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-202000"
	Sep 17 09:37:51 running-upgrade-202000 kubelet[12656]: E0917 09:37:51.477702   12656 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-202000\" already exists" pod="kube-system/etcd-running-upgrade-202000"
	Sep 17 09:37:51 running-upgrade-202000 kubelet[12656]: I0917 09:37:51.675833   12656 request.go:601] Waited for 1.115313892s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 17 09:37:51 running-upgrade-202000 kubelet[12656]: E0917 09:37:51.679000   12656 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-202000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-202000"
	Sep 17 09:38:01 running-upgrade-202000 kubelet[12656]: I0917 09:38:01.995165   12656 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.002808   12656 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.002823   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1c2de2d2-f587-4df8-ac70-81ca7450afc7-tmp\") pod \"storage-provisioner\" (UID: \"1c2de2d2-f587-4df8-ac70-81ca7450afc7\") " pod="kube-system/storage-provisioner"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.002836   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwz6\" (UniqueName: \"kubernetes.io/projected/1c2de2d2-f587-4df8-ac70-81ca7450afc7-kube-api-access-zfwz6\") pod \"storage-provisioner\" (UID: \"1c2de2d2-f587-4df8-ac70-81ca7450afc7\") " pod="kube-system/storage-provisioner"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.003145   12656 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: E0917 09:38:02.106225   12656 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: E0917 09:38:02.106238   12656 projected.go:192] Error preparing data for projected volume kube-api-access-zfwz6 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: E0917 09:38:02.106270   12656 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/1c2de2d2-f587-4df8-ac70-81ca7450afc7-kube-api-access-zfwz6 podName:1c2de2d2-f587-4df8-ac70-81ca7450afc7 nodeName:}" failed. No retries permitted until 2024-09-17 09:38:02.60625714 +0000 UTC m=+13.181605077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zfwz6" (UniqueName: "kubernetes.io/projected/1c2de2d2-f587-4df8-ac70-81ca7450afc7-kube-api-access-zfwz6") pod "storage-provisioner" (UID: "1c2de2d2-f587-4df8-ac70-81ca7450afc7") : configmap "kube-root-ca.crt" not found
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.603792   12656 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.606136   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85045818-2a2d-445f-8115-a919cfa616a3-kube-proxy\") pod \"kube-proxy-gs9k4\" (UID: \"85045818-2a2d-445f-8115-a919cfa616a3\") " pod="kube-system/kube-proxy-gs9k4"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.606151   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85045818-2a2d-445f-8115-a919cfa616a3-xtables-lock\") pod \"kube-proxy-gs9k4\" (UID: \"85045818-2a2d-445f-8115-a919cfa616a3\") " pod="kube-system/kube-proxy-gs9k4"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.606171   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85045818-2a2d-445f-8115-a919cfa616a3-lib-modules\") pod \"kube-proxy-gs9k4\" (UID: \"85045818-2a2d-445f-8115-a919cfa616a3\") " pod="kube-system/kube-proxy-gs9k4"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.606189   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vvt\" (UniqueName: \"kubernetes.io/projected/85045818-2a2d-445f-8115-a919cfa616a3-kube-api-access-s4vvt\") pod \"kube-proxy-gs9k4\" (UID: \"85045818-2a2d-445f-8115-a919cfa616a3\") " pod="kube-system/kube-proxy-gs9k4"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.750560   12656 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.759281   12656 topology_manager.go:200] "Topology Admit Handler"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.808899   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxw26\" (UniqueName: \"kubernetes.io/projected/a41325ed-09c7-448e-8fda-756fd379e720-kube-api-access-kxw26\") pod \"coredns-6d4b75cb6d-dv5qn\" (UID: \"a41325ed-09c7-448e-8fda-756fd379e720\") " pod="kube-system/coredns-6d4b75cb6d-dv5qn"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.808925   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfs6p\" (UniqueName: \"kubernetes.io/projected/3e9f10c0-5df4-4f8c-bb4c-bf3e817af200-kube-api-access-zfs6p\") pod \"coredns-6d4b75cb6d-dx2zj\" (UID: \"3e9f10c0-5df4-4f8c-bb4c-bf3e817af200\") " pod="kube-system/coredns-6d4b75cb6d-dx2zj"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.808949   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e9f10c0-5df4-4f8c-bb4c-bf3e817af200-config-volume\") pod \"coredns-6d4b75cb6d-dx2zj\" (UID: \"3e9f10c0-5df4-4f8c-bb4c-bf3e817af200\") " pod="kube-system/coredns-6d4b75cb6d-dx2zj"
	Sep 17 09:38:02 running-upgrade-202000 kubelet[12656]: I0917 09:38:02.808960   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a41325ed-09c7-448e-8fda-756fd379e720-config-volume\") pod \"coredns-6d4b75cb6d-dv5qn\" (UID: \"a41325ed-09c7-448e-8fda-756fd379e720\") " pod="kube-system/coredns-6d4b75cb6d-dv5qn"
	Sep 17 09:41:50 running-upgrade-202000 kubelet[12656]: I0917 09:41:50.968796   12656 scope.go:110] "RemoveContainer" containerID="840bcd2c52c8f631dbcabe139c17698ed308214f92888758e1d0b24828c18467"
	Sep 17 09:41:50 running-upgrade-202000 kubelet[12656]: I0917 09:41:50.984370   12656 scope.go:110] "RemoveContainer" containerID="1f429c6c263eacbfd81c9c138e57936ab2d61f8f9bb5a02d2eb01f4eab41afd9"
	
	
	==> storage-provisioner [134b5885cc44] <==
	I0917 09:38:03.173122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 09:38:03.184110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 09:38:03.184130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 09:38:03.191003       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 09:38:03.193398       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46c5bbe9-34ff-40e5-9da9-28904b079d58", APIVersion:"v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-202000_59156438-f1ed-4081-bc6d-6d2918b732ac became leader
	I0917 09:38:03.195274       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-202000_59156438-f1ed-4081-bc6d-6d2918b732ac!
	I0917 09:38:03.295797       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-202000_59156438-f1ed-4081-bc6d-6d2918b732ac!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-202000 -n running-upgrade-202000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-202000 -n running-upgrade-202000: exit status 2 (15.653501292s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-202000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-202000
--- FAIL: TestRunningBinaryUpgrade (605.66s)
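
The component logs above show a healthy control plane at 09:38, but the status probe four minutes later found the apiserver "Stopped", so the post-upgrade kubectl checks were skipped. A minimal manual re-check, reusing the Go-template --format flag the harness itself uses (profile name taken from the failure above; the Kubelet field name is an assumption about minikube's status fields):

    # Each template key selects one field of minikube's status output.
    out/minikube-darwin-arm64 status -p running-upgrade-202000 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
    # Collect the full log bundle for filing an issue.
    out/minikube-darwin-arm64 logs -p running-upgrade-202000 --file=logs.txt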

TestKubernetesUpgrade (18.54s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.873863292s)

-- stdout --
	* [kubernetes-upgrade-685000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-685000" primary control-plane node in "kubernetes-upgrade-685000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-685000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:35:17.664314    4299 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:35:17.664441    4299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:35:17.664444    4299 out.go:358] Setting ErrFile to fd 2...
	I0917 02:35:17.664447    4299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:35:17.664587    4299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:35:17.665670    4299 out.go:352] Setting JSON to false
	I0917 02:35:17.681738    4299 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3887,"bootTime":1726561830,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:35:17.681808    4299 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:35:17.687867    4299 out.go:177] * [kubernetes-upgrade-685000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:35:17.695809    4299 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:35:17.695876    4299 notify.go:220] Checking for updates...
	I0917 02:35:17.701838    4299 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:35:17.704837    4299 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:35:17.707805    4299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:35:17.710818    4299 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:35:17.713890    4299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:35:17.717191    4299 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:35:17.717257    4299 config.go:182] Loaded profile config "running-upgrade-202000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:35:17.717303    4299 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:35:17.721743    4299 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:35:17.728822    4299 start.go:297] selected driver: qemu2
	I0917 02:35:17.728829    4299 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:35:17.728837    4299 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:35:17.731149    4299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:35:17.741408    4299 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:35:17.744988    4299 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 02:35:17.745007    4299 cni.go:84] Creating CNI manager for ""
	I0917 02:35:17.745030    4299 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 02:35:17.745068    4299 start.go:340] cluster config:
	{Name:kubernetes-upgrade-685000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:35:17.748771    4299 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:35:17.755837    4299 out.go:177] * Starting "kubernetes-upgrade-685000" primary control-plane node in "kubernetes-upgrade-685000" cluster
	I0917 02:35:17.759672    4299 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 02:35:17.759694    4299 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 02:35:17.759703    4299 cache.go:56] Caching tarball of preloaded images
	I0917 02:35:17.759773    4299 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:35:17.759778    4299 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 02:35:17.759852    4299 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kubernetes-upgrade-685000/config.json ...
	I0917 02:35:17.759863    4299 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kubernetes-upgrade-685000/config.json: {Name:mk0b1eea8823391a6c574f78b8e7d45224ffc577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:35:17.760178    4299 start.go:360] acquireMachinesLock for kubernetes-upgrade-685000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:35:17.760216    4299 start.go:364] duration metric: took 31.25µs to acquireMachinesLock for "kubernetes-upgrade-685000"
	I0917 02:35:17.760228    4299 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:35:17.760256    4299 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:35:17.763831    4299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:35:17.781891    4299 start.go:159] libmachine.API.Create for "kubernetes-upgrade-685000" (driver="qemu2")
	I0917 02:35:17.781928    4299 client.go:168] LocalClient.Create starting
	I0917 02:35:17.781994    4299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:35:17.782032    4299 main.go:141] libmachine: Decoding PEM data...
	I0917 02:35:17.782042    4299 main.go:141] libmachine: Parsing certificate...
	I0917 02:35:17.782084    4299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:35:17.782112    4299 main.go:141] libmachine: Decoding PEM data...
	I0917 02:35:17.782120    4299 main.go:141] libmachine: Parsing certificate...
	I0917 02:35:17.782500    4299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:35:17.945299    4299 main.go:141] libmachine: Creating SSH key...
	I0917 02:35:18.063073    4299 main.go:141] libmachine: Creating Disk image...
	I0917 02:35:18.063085    4299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:35:18.063316    4299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:18.073090    4299 main.go:141] libmachine: STDOUT: 
	I0917 02:35:18.073114    4299 main.go:141] libmachine: STDERR: 
	I0917 02:35:18.073184    4299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2 +20000M
	I0917 02:35:18.081349    4299 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:35:18.081364    4299 main.go:141] libmachine: STDERR: 
	I0917 02:35:18.081401    4299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:18.081410    4299 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:35:18.081420    4299 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:35:18.081452    4299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:7a:03:a5:b9:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:18.083113    4299 main.go:141] libmachine: STDOUT: 
	I0917 02:35:18.083127    4299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:35:18.083147    4299 client.go:171] duration metric: took 301.213834ms to LocalClient.Create
	I0917 02:35:20.085266    4299 start.go:128] duration metric: took 2.325008291s to createHost
	I0917 02:35:20.085291    4299 start.go:83] releasing machines lock for "kubernetes-upgrade-685000", held for 2.325080334s
	W0917 02:35:20.085343    4299 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:35:20.090173    4299 out.go:177] * Deleting "kubernetes-upgrade-685000" in qemu2 ...
	W0917 02:35:20.116855    4299 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:35:20.116864    4299 start.go:729] Will try again in 5 seconds ...
	I0917 02:35:25.119061    4299 start.go:360] acquireMachinesLock for kubernetes-upgrade-685000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:35:25.119529    4299 start.go:364] duration metric: took 385.375µs to acquireMachinesLock for "kubernetes-upgrade-685000"
	I0917 02:35:25.119669    4299 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:35:25.119865    4299 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:35:25.127136    4299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:35:25.165775    4299 start.go:159] libmachine.API.Create for "kubernetes-upgrade-685000" (driver="qemu2")
	I0917 02:35:25.165827    4299 client.go:168] LocalClient.Create starting
	I0917 02:35:25.165944    4299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:35:25.166009    4299 main.go:141] libmachine: Decoding PEM data...
	I0917 02:35:25.166026    4299 main.go:141] libmachine: Parsing certificate...
	I0917 02:35:25.166085    4299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:35:25.166125    4299 main.go:141] libmachine: Decoding PEM data...
	I0917 02:35:25.166138    4299 main.go:141] libmachine: Parsing certificate...
	I0917 02:35:25.166673    4299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:35:25.330762    4299 main.go:141] libmachine: Creating SSH key...
	I0917 02:35:25.454187    4299 main.go:141] libmachine: Creating Disk image...
	I0917 02:35:25.454194    4299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:35:25.454403    4299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:25.463818    4299 main.go:141] libmachine: STDOUT: 
	I0917 02:35:25.463909    4299 main.go:141] libmachine: STDERR: 
	I0917 02:35:25.463973    4299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2 +20000M
	I0917 02:35:25.471992    4299 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:35:25.472013    4299 main.go:141] libmachine: STDERR: 
	I0917 02:35:25.472028    4299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:25.472032    4299 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:35:25.472040    4299 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:35:25.472083    4299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:50:3a:aa:75:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:25.473955    4299 main.go:141] libmachine: STDOUT: 
	I0917 02:35:25.474015    4299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:35:25.474030    4299 client.go:171] duration metric: took 308.197875ms to LocalClient.Create
	I0917 02:35:27.476208    4299 start.go:128] duration metric: took 2.356325417s to createHost
	I0917 02:35:27.476264    4299 start.go:83] releasing machines lock for "kubernetes-upgrade-685000", held for 2.356700583s
	W0917 02:35:27.476653    4299 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-685000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-685000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:35:27.486227    4299 out.go:201] 
	W0917 02:35:27.489216    4299 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:35:27.489240    4299 out.go:270] * 
	* 
	W0917 02:35:27.490499    4299 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:35:27.500176    4299 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
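
Both VM creation attempts fail at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the helper daemon's socket at /var/run/socket_vmnet, so the guest never gets a network. A short diagnostic sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup (paths taken from the command lines above):

    # Is the helper daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # Restart the helper; it needs root for the macOS vmnet framework.
    sudo brew services restart socket_vmnet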
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-685000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-685000: (3.303862542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-685000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-685000 status --format={{.Host}}: exit status 7 (50.726583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.170267167s)

-- stdout --
	* [kubernetes-upgrade-685000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-685000" primary control-plane node in "kubernetes-upgrade-685000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-685000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:35:30.895272    4335 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:35:30.895409    4335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:35:30.895413    4335 out.go:358] Setting ErrFile to fd 2...
	I0917 02:35:30.895415    4335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:35:30.895539    4335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:35:30.896527    4335 out.go:352] Setting JSON to false
	I0917 02:35:30.913105    4335 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3900,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:35:30.913173    4335 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:35:30.916922    4335 out.go:177] * [kubernetes-upgrade-685000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:35:30.925076    4335 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:35:30.925142    4335 notify.go:220] Checking for updates...
	I0917 02:35:30.931032    4335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:35:30.933989    4335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:35:30.936981    4335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:35:30.939986    4335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:35:30.943017    4335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:35:30.946214    4335 config.go:182] Loaded profile config "kubernetes-upgrade-685000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 02:35:30.946448    4335 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:35:30.950958    4335 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:35:30.956908    4335 start.go:297] selected driver: qemu2
	I0917 02:35:30.956915    4335 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:35:30.956960    4335 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:35:30.959245    4335 cni.go:84] Creating CNI manager for ""
	I0917 02:35:30.959278    4335 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:35:30.959294    4335 start.go:340] cluster config:
	{Name:kubernetes-upgrade-685000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-685000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:35:30.962780    4335 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:35:30.969989    4335 out.go:177] * Starting "kubernetes-upgrade-685000" primary control-plane node in "kubernetes-upgrade-685000" cluster
	I0917 02:35:30.974055    4335 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:35:30.974073    4335 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:35:30.974078    4335 cache.go:56] Caching tarball of preloaded images
	I0917 02:35:30.974144    4335 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:35:30.974149    4335 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:35:30.974212    4335 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kubernetes-upgrade-685000/config.json ...
	I0917 02:35:30.974558    4335 start.go:360] acquireMachinesLock for kubernetes-upgrade-685000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:35:30.974585    4335 start.go:364] duration metric: took 21.792µs to acquireMachinesLock for "kubernetes-upgrade-685000"
	I0917 02:35:30.974593    4335 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:35:30.974598    4335 fix.go:54] fixHost starting: 
	I0917 02:35:30.974701    4335 fix.go:112] recreateIfNeeded on kubernetes-upgrade-685000: state=Stopped err=<nil>
	W0917 02:35:30.974709    4335 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:35:30.979063    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-685000" ...
	I0917 02:35:30.986980    4335 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:35:30.987015    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:50:3a:aa:75:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:30.988824    4335 main.go:141] libmachine: STDOUT: 
	I0917 02:35:30.988840    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:35:30.988867    4335 fix.go:56] duration metric: took 14.269042ms for fixHost
	I0917 02:35:30.988870    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-685000", held for 14.281583ms
	W0917 02:35:30.988875    4335 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:35:30.988910    4335 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:35:30.988914    4335 start.go:729] Will try again in 5 seconds ...
	I0917 02:35:35.991027    4335 start.go:360] acquireMachinesLock for kubernetes-upgrade-685000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:35:35.991168    4335 start.go:364] duration metric: took 110.458µs to acquireMachinesLock for "kubernetes-upgrade-685000"
	I0917 02:35:35.991186    4335 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:35:35.991191    4335 fix.go:54] fixHost starting: 
	I0917 02:35:35.991358    4335 fix.go:112] recreateIfNeeded on kubernetes-upgrade-685000: state=Stopped err=<nil>
	W0917 02:35:35.991364    4335 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:35:35.998484    4335 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-685000" ...
	I0917 02:35:36.002489    4335 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:35:36.002544    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:50:3a:aa:75:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubernetes-upgrade-685000/disk.qcow2
	I0917 02:35:36.004759    4335 main.go:141] libmachine: STDOUT: 
	I0917 02:35:36.004774    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:35:36.004795    4335 fix.go:56] duration metric: took 13.604542ms for fixHost
	I0917 02:35:36.004799    4335 start.go:83] releasing machines lock for "kubernetes-upgrade-685000", held for 13.626917ms
	W0917 02:35:36.004835    4335 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-685000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:35:36.013430    4335 out.go:201] 
	W0917 02:35:36.016491    4335 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:35:36.016497    4335 out.go:270] * 
	* 
	W0917 02:35:36.017007    4335 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:35:36.027401    4335 out.go:201] 
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-685000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-685000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-685000 version --output=json: exit status 1 (29.043792ms)
** stderr ** 
	error: context "kubernetes-upgrade-685000" does not exist
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-17 02:35:36.066147 -0700 PDT m=+3491.352159793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-685000 -n kubernetes-upgrade-685000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-685000 -n kubernetes-upgrade-685000: exit status 7 (30.252083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-685000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-685000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-685000
--- FAIL: TestKubernetesUpgrade (18.54s)
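Note on this failure: both start attempts above die at the same point. /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never handed a network file descriptor, the VM never boots, and the retry five seconds later fails identically. A quick way to confirm whether the daemon is down is to dial the socket directly. The following is a hypothetical standalone diagnostic, not minikube code; the only assumption is the socket path, which is taken from the log above.

    // checksock.go: hypothetical diagnostic sketch, not part of minikube.
    // It dials the unix socket that the qemu2 driver passes to
    // socket_vmnet_client; a "connection refused" here reproduces the
    // failure recorded in the log.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path from the log above

    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is listening on", sock)
    }

If the dial fails, the socket_vmnet daemon (typically run under launchd on macOS hosts) has to be restarted before the qemu2 driver can boot any VM, which is consistent with every qemu2-driver test in this run failing with the same error.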
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.2s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current956562694/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.20s)
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.65s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4137475283/001
* Using the hyperkit driver based on user configuration
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.65s)
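Note on both hyperkit subtests: they fail before any upgrade logic runs. hyperkit is an Intel-only macOS hypervisor, and this job runs on darwin/arm64, so minikube rejects the driver outright with DRV_UNSUPPORTED_OS (exit status 56). The gate amounts to a GOOS/GOARCH check; the sketch below is illustrative only and is not minikube's actual implementation.

    // hyperkit_gate.go: illustrative sketch of the platform gate; the real
    // check lives in minikube's driver registry, not in this form.
    package main

    import (
    	"fmt"
    	"os"
    	"runtime"
    )

    // hyperkitSupported reports whether the hyperkit driver can run on the
    // current host: it requires macOS on x86_64.
    func hyperkitSupported() bool {
    	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
    }

    func main() {
    	if !hyperkitSupported() {
    		fmt.Printf("driver 'hyperkit' is not supported on %s/%s\n",
    			runtime.GOOS, runtime.GOARCH)
    		os.Exit(56) // the exit status these tests assert on
    	}
    	fmt.Println("hyperkit is usable on this host")
    }

On an arm64 runner these subtests can only ever exercise the error path, so the failures say nothing about the upgrade logic itself.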
TestStoppedBinaryUpgrade/Upgrade (583.46s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3320257531 start -p stopped-upgrade-288000 --memory=2200 --vm-driver=qemu2 
E0917 02:36:18.063589    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3320257531 start -p stopped-upgrade-288000 --memory=2200 --vm-driver=qemu2 : (49.638227125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3320257531 -p stopped-upgrade-288000 stop
E0917 02:36:36.417025    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3320257531 -p stopped-upgrade-288000 stop: (12.092983s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0917 02:41:18.053860    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:41:36.408026    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.616559917s)
-- stdout --
	* [stopped-upgrade-288000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-288000" primary control-plane node in "stopped-upgrade-288000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-288000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0917 02:36:39.285186    4370 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:36:39.285339    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:36:39.285342    4370 out.go:358] Setting ErrFile to fd 2...
	I0917 02:36:39.285345    4370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:36:39.285464    4370 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:36:39.286509    4370 out.go:352] Setting JSON to false
	I0917 02:36:39.303429    4370 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3969,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:36:39.303492    4370 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:36:39.308687    4370 out.go:177] * [stopped-upgrade-288000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:36:39.316758    4370 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:36:39.316815    4370 notify.go:220] Checking for updates...
	I0917 02:36:39.325609    4370 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:36:39.328603    4370 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:36:39.331618    4370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:36:39.334678    4370 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:36:39.335931    4370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:36:39.338928    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:36:39.342649    4370 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 02:36:39.345647    4370 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:36:39.349587    4370 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:36:39.356646    4370 start.go:297] selected driver: qemu2
	I0917 02:36:39.356651    4370 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:36:39.356694    4370 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:36:39.359061    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:36:39.359089    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:36:39.359109    4370 start.go:340] cluster config:
	{Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:36:39.359156    4370 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:36:39.367659    4370 out.go:177] * Starting "stopped-upgrade-288000" primary control-plane node in "stopped-upgrade-288000" cluster
	I0917 02:36:39.371643    4370 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:36:39.371659    4370 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0917 02:36:39.371666    4370 cache.go:56] Caching tarball of preloaded images
	I0917 02:36:39.371729    4370 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:36:39.371736    4370 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0917 02:36:39.371792    4370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/config.json ...
	I0917 02:36:39.372239    4370 start.go:360] acquireMachinesLock for stopped-upgrade-288000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:36:39.372272    4370 start.go:364] duration metric: took 27.833µs to acquireMachinesLock for "stopped-upgrade-288000"
	I0917 02:36:39.372280    4370 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:36:39.372286    4370 fix.go:54] fixHost starting: 
	I0917 02:36:39.372389    4370 fix.go:112] recreateIfNeeded on stopped-upgrade-288000: state=Stopped err=<nil>
	W0917 02:36:39.372398    4370 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:36:39.380567    4370 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-288000" ...
	I0917 02:36:39.384644    4370 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:36:39.384718    4370 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50472-:22,hostfwd=tcp::50473-:2376,hostname=stopped-upgrade-288000 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/disk.qcow2
	I0917 02:36:39.430165    4370 main.go:141] libmachine: STDOUT: 
	I0917 02:36:39.430187    4370 main.go:141] libmachine: STDERR: 
	I0917 02:36:39.430195    4370 main.go:141] libmachine: Waiting for VM to start (ssh -p 50472 docker@127.0.0.1)...
	I0917 02:36:59.808580    4370 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/config.json ...
	I0917 02:36:59.809578    4370 machine.go:93] provisionDockerMachine start ...
	I0917 02:36:59.809757    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.810192    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.810209    4370 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 02:36:59.884227    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 02:36:59.884252    4370 buildroot.go:166] provisioning hostname "stopped-upgrade-288000"
	I0917 02:36:59.884380    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.884609    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.884621    4370 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-288000 && echo "stopped-upgrade-288000" | sudo tee /etc/hostname
	I0917 02:36:59.956718    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-288000
	
	I0917 02:36:59.956776    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:36:59.956911    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:36:59.956924    4370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-288000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-288000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-288000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 02:37:00.018325    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 02:37:00.018337    4370 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1056/.minikube}
	I0917 02:37:00.018346    4370 buildroot.go:174] setting up certificates
	I0917 02:37:00.018352    4370 provision.go:84] configureAuth start
	I0917 02:37:00.018356    4370 provision.go:143] copyHostCerts
	I0917 02:37:00.018446    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem, removing ...
	I0917 02:37:00.018454    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem
	I0917 02:37:00.018573    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/key.pem (1675 bytes)
	I0917 02:37:00.018753    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem, removing ...
	I0917 02:37:00.018758    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem
	I0917 02:37:00.018814    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.pem (1082 bytes)
	I0917 02:37:00.018934    4370 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem, removing ...
	I0917 02:37:00.018939    4370 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem
	I0917 02:37:00.018989    4370 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1056/.minikube/cert.pem (1123 bytes)
	I0917 02:37:00.019101    4370 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-288000 san=[127.0.0.1 localhost minikube stopped-upgrade-288000]
	I0917 02:37:00.056391    4370 provision.go:177] copyRemoteCerts
	I0917 02:37:00.056423    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 02:37:00.056430    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.089075    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 02:37:00.095977    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 02:37:00.102728    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 02:37:00.110134    4370 provision.go:87] duration metric: took 91.774167ms to configureAuth
	I0917 02:37:00.110143    4370 buildroot.go:189] setting minikube options for container-runtime
	I0917 02:37:00.110241    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:37:00.110286    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.110376    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.110381    4370 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 02:37:00.167644    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0917 02:37:00.167652    4370 buildroot.go:70] root file system type: tmpfs
	I0917 02:37:00.167702    4370 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 02:37:00.167753    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.167872    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.167911    4370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 02:37:00.232229    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 02:37:00.232291    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.232424    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.232435    4370 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 02:37:00.597237    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0917 02:37:00.597253    4370 machine.go:96] duration metric: took 787.66625ms to provisionDockerMachine
	I0917 02:37:00.597266    4370 start.go:293] postStartSetup for "stopped-upgrade-288000" (driver="qemu2")
	I0917 02:37:00.597272    4370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 02:37:00.597339    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 02:37:00.597353    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.627001    4370 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 02:37:00.628279    4370 info.go:137] Remote host: Buildroot 2021.02.12
	I0917 02:37:00.628286    4370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/addons for local assets ...
	I0917 02:37:00.628388    4370 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1056/.minikube/files for local assets ...
	I0917 02:37:00.628518    4370 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem -> 15552.pem in /etc/ssl/certs
	I0917 02:37:00.628649    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 02:37:00.631068    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:37:00.638133    4370 start.go:296] duration metric: took 40.862875ms for postStartSetup
	I0917 02:37:00.638145    4370 fix.go:56] duration metric: took 21.265963708s for fixHost
	I0917 02:37:00.638178    4370 main.go:141] libmachine: Using SSH client type: native
	I0917 02:37:00.638276    4370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104dbd190] 0x104dbf9d0 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0917 02:37:00.638281    4370 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 02:37:00.692849    4370 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726565820.450382254
	
	I0917 02:37:00.692857    4370 fix.go:216] guest clock: 1726565820.450382254
	I0917 02:37:00.692861    4370 fix.go:229] Guest: 2024-09-17 02:37:00.450382254 -0700 PDT Remote: 2024-09-17 02:37:00.638147 -0700 PDT m=+21.372789251 (delta=-187.764746ms)
	I0917 02:37:00.692872    4370 fix.go:200] guest clock delta is within tolerance: -187.764746ms
	I0917 02:37:00.692875    4370 start.go:83] releasing machines lock for "stopped-upgrade-288000", held for 21.320700042s
	I0917 02:37:00.692944    4370 ssh_runner.go:195] Run: cat /version.json
	I0917 02:37:00.692957    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:37:00.692945    4370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 02:37:00.693025    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	W0917 02:37:00.693545    4370 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50472: connect: connection refused
	I0917 02:37:00.693567    4370 retry.go:31] will retry after 217.257254ms: dial tcp [::1]:50472: connect: connection refused
	W0917 02:37:00.720552    4370 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0917 02:37:00.720603    4370 ssh_runner.go:195] Run: systemctl --version
	I0917 02:37:00.722288    4370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 02:37:00.723926    4370 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 02:37:00.723956    4370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0917 02:37:00.726744    4370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0917 02:37:00.731362    4370 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0917 02:37:00.731372    4370 start.go:495] detecting cgroup driver to use...
	I0917 02:37:00.731448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:37:00.738418    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0917 02:37:00.741987    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 02:37:00.745235    4370 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 02:37:00.745264    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 02:37:00.748205    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:37:00.751035    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 02:37:00.754319    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 02:37:00.757659    4370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 02:37:00.760764    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 02:37:00.763607    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 02:37:00.766695    4370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 02:37:00.770079    4370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 02:37:00.773070    4370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 02:37:00.775603    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:00.855135    4370 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 02:37:00.861492    4370 start.go:495] detecting cgroup driver to use...
	I0917 02:37:00.861547    4370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 02:37:00.866663    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:37:00.871563    4370 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 02:37:00.880953    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 02:37:00.885538    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:37:00.890293    4370 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 02:37:00.952701    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 02:37:00.982586    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 02:37:00.988510    4370 ssh_runner.go:195] Run: which cri-dockerd
	I0917 02:37:00.990046    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 02:37:00.995360    4370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0917 02:37:01.002424    4370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 02:37:01.065582    4370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 02:37:01.143698    4370 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 02:37:01.143762    4370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 02:37:01.148689    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:01.226782    4370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:37:02.390174    4370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163378792s)
	I0917 02:37:02.390244    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 02:37:02.394655    4370 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 02:37:02.401255    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:37:02.405659    4370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 02:37:02.486784    4370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 02:37:02.548590    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:02.610895    4370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 02:37:02.617570    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 02:37:02.622569    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:02.689333    4370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 02:37:02.730032    4370 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 02:37:02.730128    4370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 02:37:02.732579    4370 start.go:563] Will wait 60s for crictl version
	I0917 02:37:02.732652    4370 ssh_runner.go:195] Run: which crictl
	I0917 02:37:02.734316    4370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 02:37:02.750408    4370 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0917 02:37:02.750488    4370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:37:02.768013    4370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 02:37:02.789828    4370 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0917 02:37:02.789925    4370 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0917 02:37:02.791671    4370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 02:37:02.795716    4370 kubeadm.go:883] updating cluster {Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0917 02:37:02.795767    4370 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0917 02:37:02.795830    4370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:37:02.810393    4370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:37:02.810402    4370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:37:02.810461    4370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:37:02.814331    4370 ssh_runner.go:195] Run: which lz4
	I0917 02:37:02.816234    4370 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 02:37:02.817766    4370 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 02:37:02.817791    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0917 02:37:03.757489    4370 docker.go:649] duration metric: took 941.327208ms to copy over tarball
	I0917 02:37:03.757575    4370 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 02:37:04.905177    4370 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.147589125s)
	I0917 02:37:04.905190    4370 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 02:37:04.920994    4370 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0917 02:37:04.924088    4370 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0917 02:37:04.929011    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:04.990282    4370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 02:37:06.375893    4370 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.385602208s)
	I0917 02:37:06.375999    4370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 02:37:06.389038    4370 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 02:37:06.389048    4370 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0917 02:37:06.389053    4370 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0917 02:37:06.393875    4370 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:06.396603    4370 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.398832    4370 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:06.398974    4370 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.401316    4370 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.401341    4370 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.402622    4370 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.402558    4370 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.404538    4370 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.404539    4370 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.405962    4370 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0917 02:37:06.405994    4370 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.407264    4370 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.407293    4370 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.408063    4370 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0917 02:37:06.408978    4370 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.838100    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.839133    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.848715    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.849786    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.852323    4370 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0917 02:37:06.852338    4370 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0917 02:37:06.852346    4370 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.852347    4370 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.852392    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0917 02:37:06.852452    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0917 02:37:06.864378    4370 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0917 02:37:06.864401    4370 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.864466    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0917 02:37:06.872000    4370 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0917 02:37:06.872024    4370 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.872087    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0917 02:37:06.877755    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.879391    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0917 02:37:06.881610    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0917 02:37:06.881678    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0917 02:37:06.881681    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0917 02:37:06.892068    4370 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0917 02:37:06.892211    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.897817    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0917 02:37:06.900509    4370 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0917 02:37:06.900527    4370 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.900544    4370 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0917 02:37:06.900553    4370 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0917 02:37:06.900584    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0917 02:37:06.900592    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0917 02:37:06.906760    4370 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0917 02:37:06.906777    4370 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.906833    4370 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0917 02:37:06.919277    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0917 02:37:06.919340    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0917 02:37:06.919412    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:37:06.919413    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0917 02:37:06.923870    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0917 02:37:06.923891    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0917 02:37:06.923901    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0917 02:37:06.923969    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0917 02:37:06.923980    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0917 02:37:06.923990    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:37:06.932371    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0917 02:37:06.932395    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0917 02:37:06.943657    4370 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0917 02:37:06.943671    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0917 02:37:07.016843    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0917 02:37:07.038458    4370 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0917 02:37:07.038474    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0917 02:37:07.143170    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0917 02:37:07.250602    4370 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0917 02:37:07.250629    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0917 02:37:07.277309    4370 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0917 02:37:07.277442    4370 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.400874    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0917 02:37:07.400912    4370 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0917 02:37:07.400937    4370 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.401013    4370 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:37:07.414248    4370 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0917 02:37:07.414380    4370 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:37:07.415749    4370 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0917 02:37:07.415763    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0917 02:37:07.446275    4370 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0917 02:37:07.446289    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0917 02:37:07.690313    4370 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0917 02:37:07.690356    4370 cache_images.go:92] duration metric: took 1.301302791s to LoadCachedImages
	W0917 02:37:07.690389    4370 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
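
Each cached image above follows the same route into the runtime: the wrong-arch copy is removed with docker rmi, the cached tar is scp'd to /var/lib/minikube/images, and the tar is streamed into the daemon with sudo cat <tar> | docker load. A sketch of that load step, run locally for illustration (loadImage is a hypothetical helper; on the guest the pipe goes through sudo and an SSH session):

    // Minimal sketch of the "cat image.tar | docker load" step above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func loadImage(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        // docker load reads the image tarball from stdin, mirroring
        // `/bin/bash -c "sudo cat ... | docker load"` in the log.
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
            fmt.Println(err)
        }
    }
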
	I0917 02:37:07.690399    4370 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0917 02:37:07.690452    4370 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-288000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
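
The [Unit]/[Service] fragment above is the kubelet systemd drop-in later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 380-byte scp a few lines below). A sketch of rendering it with text/template, under the assumption that the Kubernetes version, node name, and node IP are the interpolated fields; the struct and template text here are illustrative, not minikube's real template:

    // Sketch of rendering the kubelet drop-in above with text/template.
    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from the log above; fields are illustrative.
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.24.1", "stopped-upgrade-288000", "10.0.2.15"})
    }
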
	I0917 02:37:07.690531    4370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 02:37:07.704062    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:37:07.704080    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:37:07.704087    4370 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 02:37:07.704099    4370 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-288000 NodeName:stopped-upgrade-288000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 02:37:07.704159    4370 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-288000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 02:37:07.704220    4370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0917 02:37:07.706884    4370 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 02:37:07.706914    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 02:37:07.709911    4370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0917 02:37:07.714702    4370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 02:37:07.719282    4370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0917 02:37:07.724615    4370 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0917 02:37:07.725761    4370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
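
The pair of commands above pins control-plane.minikube.internal in /etc/hosts idempotently: grep for the exact entry, and if it is absent rewrite the file with any stale entry filtered out and the current IP appended. The same logic in Go, with the hosts path taken as a parameter so the sketch can be tried on a scratch file (pinHost is an illustrative name):

    // Sketch of the idempotent /etc/hosts update above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirror `grep -v $'\tcontrol-plane.minikube.internal$'`.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := pinHost("hosts.test", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
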
	I0917 02:37:07.729617    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:37:07.810834    4370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:37:07.821004    4370 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000 for IP: 10.0.2.15
	I0917 02:37:07.821015    4370 certs.go:194] generating shared ca certs ...
	I0917 02:37:07.821024    4370 certs.go:226] acquiring lock for ca certs: {Name:mkff5fc329c6145be4c1381e1b58175b65aa8cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.821195    4370 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key
	I0917 02:37:07.821273    4370 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key
	I0917 02:37:07.821280    4370 certs.go:256] generating profile certs ...
	I0917 02:37:07.821356    4370 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key
	I0917 02:37:07.821375    4370 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c
	I0917 02:37:07.821384    4370 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0917 02:37:07.896905    4370 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c ...
	I0917 02:37:07.896922    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c: {Name:mk7a15f968916d0ad32e297bea40826c255d208a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.897212    4370 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c ...
	I0917 02:37:07.897216    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c: {Name:mk7883df2a29dfa3e4e916f1dc22deae5b84d83d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:07.897366    4370 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt.a0c8013c -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt
	I0917 02:37:07.897498    4370 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key.a0c8013c -> /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key
	I0917 02:37:07.897649    4370 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.key
	I0917 02:37:07.897780    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem (1338 bytes)
	W0917 02:37:07.897813    4370 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555_empty.pem, impossibly tiny 0 bytes
	I0917 02:37:07.897819    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 02:37:07.897844    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem (1082 bytes)
	I0917 02:37:07.897865    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem (1123 bytes)
	I0917 02:37:07.897883    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/key.pem (1675 bytes)
	I0917 02:37:07.897925    4370 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem (1708 bytes)
	I0917 02:37:07.898291    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 02:37:07.905549    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 02:37:07.911922    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 02:37:07.918651    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 02:37:07.925922    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0917 02:37:07.933212    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 02:37:07.940050    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 02:37:07.946589    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 02:37:07.953942    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/1555.pem --> /usr/share/ca-certificates/1555.pem (1338 bytes)
	I0917 02:37:07.960366    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/ssl/certs/15552.pem --> /usr/share/ca-certificates/15552.pem (1708 bytes)
	I0917 02:37:07.966729    4370 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 02:37:07.973673    4370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
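
The scp lines above install the shared CA material and the freshly minted profile certs into /var/lib/minikube/certs. For the apiserver cert generated a few lines earlier, the essential detail is the IP SAN list [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15] (in-cluster service VIP, loopback, and node addresses). A sketch of minting such a cert with crypto/x509, self-signed here for brevity where minikube signs with its minikubeCA:

    // Sketch: an apiserver serving cert carrying the IP SANs above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        // Self-signed (template signs itself); minikube uses its CA here.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
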
	I0917 02:37:07.978998    4370 ssh_runner.go:195] Run: openssl version
	I0917 02:37:07.980955    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15552.pem && ln -fs /usr/share/ca-certificates/15552.pem /etc/ssl/certs/15552.pem"
	I0917 02:37:07.983980    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.985285    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 08:53 /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.985309    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15552.pem
	I0917 02:37:07.986958    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15552.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 02:37:07.990225    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 02:37:07.993449    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.995071    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 08:38 /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.995090    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 02:37:07.996948    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 02:37:07.999620    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1555.pem && ln -fs /usr/share/ca-certificates/1555.pem /etc/ssl/certs/1555.pem"
	I0917 02:37:08.002788    4370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.004248    4370 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 08:53 /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.004270    4370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1555.pem
	I0917 02:37:08.005892    4370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1555.pem /etc/ssl/certs/51391683.0"
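
The openssl/ln sequences above install each CA into the system trust store: openssl x509 -hash -noout prints the subject hash, and OpenSSL resolves trust anchors through /etc/ssl/certs/<hash>.0 symlinks (hence b5213941.0 pointing at minikubeCA.pem). A sketch of that convention, shelling out to openssl the same way (installCA is an illustrative name; it needs root and a real /etc/ssl/certs to run):

    // Sketch of the subject-hash symlink convention above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, mirroring `ln -fs` in the log.
        os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
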
	I0917 02:37:08.009048    4370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 02:37:08.010369    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 02:37:08.012283    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 02:37:08.014051    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 02:37:08.016055    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 02:37:08.017852    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 02:37:08.019679    4370 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
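
The -checkend 86400 probes above ask one question per certificate: will it still be valid 24 hours from now? The same check in Go, parsing the PEM and comparing NotAfter directly (expiresWithin is an illustrative name):

    // Sketch of what `openssl x509 -checkend 86400` verifies.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when the cert lapses before now+d.
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver.crt", 86400*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
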
	I0917 02:37:08.021421    4370 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50506 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-288000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0917 02:37:08.021502    4370 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:37:08.031427    4370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 02:37:08.034815    4370 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 02:37:08.034826    4370 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 02:37:08.034853    4370 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 02:37:08.038511    4370 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 02:37:08.038800    4370 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-288000" does not appear in /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:37:08.038895    4370 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1056/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-288000" cluster setting kubeconfig missing "stopped-upgrade-288000" context setting]
	I0917 02:37:08.039073    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:37:08.039507    4370 kapi.go:59] client config for stopped-upgrade-288000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106395800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:37:08.039824    4370 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 02:37:08.042795    4370 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-288000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
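
The diff above is how config drift is detected: diff -u exits 0 when the rendered kubeadm.yaml matches the one already on disk and 1 when it differs, so the exit code alone decides whether to reconfigure; the unified diff text is only logged. A sketch of that decision (configDrifted is an illustrative name):

    // Sketch of detecting drift from diff's exit status.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // exit 0: identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: differs, reconfigure
        }
        return false, "", err // exit 2: diff itself failed
    }

    func main() {
        drifted, diff, err := configDrifted("kubeadm.yaml", "kubeadm.yaml.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("drifted:", drifted)
        fmt.Print(diff)
    }
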
	I0917 02:37:08.042801    4370 kubeadm.go:1160] stopping kube-system containers ...
	I0917 02:37:08.042853    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 02:37:08.053693    4370 docker.go:483] Stopping containers: [5d12a44bd79e 7b4b71b6f19a d7b6ff64cafe b1296b57ee41 80dbf74e70dd 637480f75136 b459245dcdb4 7d82f00a9f22 2bd07895721d]
	I0917 02:37:08.053778    4370 ssh_runner.go:195] Run: docker stop 5d12a44bd79e 7b4b71b6f19a d7b6ff64cafe b1296b57ee41 80dbf74e70dd 637480f75136 b459245dcdb4 7d82f00a9f22 2bd07895721d
	I0917 02:37:08.064500    4370 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 02:37:08.069849    4370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:37:08.072968    4370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:37:08.072979    4370 kubeadm.go:157] found existing configuration files:
	
	I0917 02:37:08.073005    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0917 02:37:08.075421    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:37:08.075452    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:37:08.078215    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0917 02:37:08.081291    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:37:08.081314    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:37:08.083958    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0917 02:37:08.086595    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:37:08.086629    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:37:08.089888    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0917 02:37:08.092607    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:37:08.092631    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
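
The grep/rm pairs above sweep stale kubeconfigs: each file under /etc/kubernetes must mention the expected control-plane endpoint, and any file that does not (or does not exist; rm -f tolerates absence) is removed so the kubeadm phases below can regenerate it. The same sweep in Go, with relative paths so the sketch is safe to try (sweepStale is an illustrative name):

    // Sketch of the stale-kubeconfig sweep above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func sweepStale(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            // Missing file or wrong endpoint: remove so it gets regenerated.
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f)
                fmt.Println("removed", f)
            }
        }
    }

    func main() {
        sweepStale("https://control-plane.minikube.internal:50506", []string{
            "admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf",
        })
    }
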
	I0917 02:37:08.095107    4370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:37:08.098239    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.120898    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.574813    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.715064    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 02:37:08.735686    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
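
Rather than a full kubeadm init, the restart path above replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, and local etcd, all driven by the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence (assumes kubeadm on PATH and root; the phase list mirrors the five commands above):

    // Sketch of replaying kubeadm init phases during a cluster restart.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            // Each phase is re-run against the existing cluster state.
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }
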
	I0917 02:37:08.762550    4370 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:37:08.762655    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:09.264844    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:09.763724    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:37:09.768163    4370 api_server.go:72] duration metric: took 1.005618792s to wait for apiserver process to appear ...
	I0917 02:37:09.768172    4370 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:37:09.768187    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:14.770272    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:14.770350    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:19.770626    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:19.770692    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:24.771186    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:24.771245    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:29.771934    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:29.771981    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:34.772335    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:34.772355    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:39.773127    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:39.773188    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:44.773488    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:44.773510    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:49.774316    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:49.774336    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:54.775390    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:54.775471    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:37:59.775752    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:37:59.775782    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:04.776575    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:04.776614    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:09.777845    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
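
The healthz loop above probes https://10.0.2.15:8443/healthz with a short per-request deadline; each "context deadline exceeded" line is one probe timing out, and after about a minute of failures the code falls through to the log gathering below. A sketch of such a wait loop (the timeouts and the InsecureSkipVerify shortcut are illustrative; minikube wires in the cluster CA instead):

    // Sketch of polling an apiserver healthz endpoint until a deadline.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-probe deadline
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
    }
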
	I0917 02:38:09.777948    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:09.789459    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:09.789546    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:09.800188    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:09.800281    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:09.810697    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:09.810764    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:09.820945    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:09.821026    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:09.831690    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:09.831785    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:09.842323    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:09.842396    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:09.852578    4370 logs.go:276] 0 containers: []
	W0917 02:38:09.852590    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:09.852653    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:09.862907    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:09.862927    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:09.862932    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:09.876843    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:09.876857    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:09.890816    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:09.890827    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:09.909822    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:09.909832    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:09.914066    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:09.914076    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:09.924775    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:09.924786    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:09.936512    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:09.936522    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:09.958677    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:09.958690    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:09.976911    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:09.976922    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:09.988274    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:09.988284    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:10.064951    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:10.064962    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:10.089625    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:10.089636    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:10.101742    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:10.101754    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:10.139589    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:10.139608    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:10.181334    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:10.181344    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:10.196072    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:10.196083    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:10.207824    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:10.207837    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
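
The diagnostics pass above is mechanical: for each control-plane component, docker ps -a --filter name=k8s_<component> lists matching containers, running or exited, and docker logs --tail 400 tails each one (two IDs per component here because the pre-restart containers still exist). A sketch of that gather step (containersFor is an illustrative name; the k8s_ prefix is cri-dockerd's container naming):

    // Sketch of gathering per-component container logs as above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containersFor(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containersFor(c)
            if err != nil {
                fmt.Println(err)
                return
            }
            for _, id := range ids {
                // Mirror `docker logs --tail 400 <id>` from the log.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }
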
	I0917 02:38:12.720562    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:17.722213    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:17.722492    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:17.742352    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:17.742485    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:17.756907    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:17.757003    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:17.768800    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:17.768891    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:17.780531    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:17.780616    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:17.790780    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:17.790859    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:17.805596    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:17.805678    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:17.816348    4370 logs.go:276] 0 containers: []
	W0917 02:38:17.816359    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:17.816434    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:17.831714    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:17.831729    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:17.831734    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:17.848430    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:17.848440    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:17.852692    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:17.852699    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:17.866879    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:17.866893    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:17.906296    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:17.906308    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:17.919406    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:17.919417    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:17.937641    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:17.937655    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:17.949563    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:17.949574    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:17.964341    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:17.964355    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:17.975598    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:17.975611    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:17.988088    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:17.988101    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:18.025698    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:18.025712    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:18.063356    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:18.063369    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:18.088250    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:18.088266    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:18.113996    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:18.114003    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:18.135115    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:18.135126    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:18.146632    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:18.146644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:20.659410    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:25.659740    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:25.660032    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:25.679922    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:25.680063    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:25.695858    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:25.695947    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:25.710504    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:25.710602    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:25.726035    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:25.726123    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:25.743266    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:25.743351    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:25.754147    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:25.754235    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:25.769103    4370 logs.go:276] 0 containers: []
	W0917 02:38:25.769118    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:25.769201    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:25.779934    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:25.779952    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:25.779958    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:25.793748    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:25.793762    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:25.806719    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:25.806735    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:25.818285    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:25.818298    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:25.860098    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:25.860114    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:25.899363    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:25.899376    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:25.924935    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:25.924942    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:25.940131    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:25.940143    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:25.958917    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:25.958931    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:25.980345    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:25.980354    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:25.995835    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:25.995850    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:26.010476    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:26.010492    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:26.021823    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:26.021834    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:26.033474    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:26.033484    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:26.046892    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:26.046903    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:26.059106    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:26.059116    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:26.102361    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:26.102372    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:28.609693    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:33.612048    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:33.612303    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:33.629109    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:33.629199    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:33.641012    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:33.641100    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:33.660107    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:33.660181    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:33.670609    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:33.670697    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:33.681218    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:33.681294    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:33.694017    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:33.694090    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:33.704609    4370 logs.go:276] 0 containers: []
	W0917 02:38:33.704622    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:33.704685    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:33.719692    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:33.719711    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:33.719717    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:33.733603    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:33.733615    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:33.744996    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:33.745006    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:33.770181    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:33.770194    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:33.807146    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:33.807156    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:33.818552    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:33.818565    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:33.833516    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:33.833527    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:33.854305    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:33.854319    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:33.866080    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:33.866090    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:33.878414    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:33.878429    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:33.915404    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:33.915416    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:33.929260    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:33.929272    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:33.967433    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:33.967448    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:33.982029    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:33.982039    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:33.986674    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:33.986682    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:34.000140    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:34.000152    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:34.017031    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:34.017041    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:36.534369    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:41.536771    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:41.537028    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:41.559782    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:41.559904    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:41.582471    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:41.582571    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:41.594548    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:41.594637    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:41.605259    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:41.605346    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:41.615429    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:41.615512    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:41.626045    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:41.626127    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:41.635839    4370 logs.go:276] 0 containers: []
	W0917 02:38:41.635851    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:41.635928    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:41.650783    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:41.650800    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:41.650806    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:41.661633    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:41.661644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:41.673663    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:41.673674    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:41.690741    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:41.690750    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:41.701901    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:41.701915    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:41.726963    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:41.726971    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:41.738662    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:41.738675    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:41.742730    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:41.742737    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:41.780890    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:41.780904    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:41.795478    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:41.795491    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:41.807206    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:41.807238    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:41.821710    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:41.821723    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:41.860797    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:41.860809    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:41.898349    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:41.898360    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:41.912088    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:41.912098    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:41.933283    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:41.933297    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:41.948149    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:41.948165    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
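
Each failed probe is followed by the same inventory pass seen in this cycle: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component, then `docker logs --tail 400 <id>` for every ID found. Below is a minimal Go sketch of those two commands run locally (minikube actually executes them over SSH inside the guest via ssh_runner.go); the helper names and error handling are invented for the example.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the enumeration step in the log: filter all
// containers by the kubelet's k8s_<component> naming convention and
// print only the IDs, one per line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the gathering step: last 400 lines per container.
// CombinedOutput is used because `docker logs` replays the container's
// stderr stream on stderr.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("enumerating", component, "failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Println(logs)
		}
	}
}
```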
	I0917 02:38:44.462277    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:49.463316    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:49.463550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:49.480884    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:49.480993    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:49.494219    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:49.494317    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:49.506052    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:49.506143    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:49.516567    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:49.516652    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:49.533127    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:49.533210    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:49.546037    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:49.546126    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:49.556042    4370 logs.go:276] 0 containers: []
	W0917 02:38:49.556054    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:49.556119    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:49.566713    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:49.566734    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:49.566740    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:49.601409    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:49.601420    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:49.613453    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:49.613466    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:49.627429    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:49.627441    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:49.642383    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:49.642396    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:49.666249    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:49.666259    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:49.679602    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:49.679616    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:49.701140    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:49.701154    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:49.712511    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:49.712522    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:49.726656    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:49.726669    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:49.741085    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:49.741095    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:49.752504    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:49.752518    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:49.769722    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:49.769735    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:38:49.781434    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:49.781448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:49.818590    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:49.818600    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:49.823502    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:49.823519    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:49.862352    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:49.862369    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:52.378975    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:38:57.381360    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:38:57.381569    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:38:57.394344    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:38:57.394436    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:38:57.406139    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:38:57.406224    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:38:57.416881    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:38:57.416995    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:38:57.430582    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:38:57.430668    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:38:57.440973    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:38:57.441057    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:38:57.451455    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:38:57.451535    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:38:57.461502    4370 logs.go:276] 0 containers: []
	W0917 02:38:57.461513    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:38:57.461581    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:38:57.472158    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:38:57.472179    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:38:57.472185    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:38:57.509548    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:38:57.509555    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:38:57.550227    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:38:57.550245    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:38:57.563798    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:38:57.563808    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:38:57.574831    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:38:57.574841    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:38:57.595724    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:38:57.595740    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:38:57.613889    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:38:57.613899    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:38:57.628803    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:38:57.628814    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:38:57.639802    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:38:57.639813    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:38:57.657533    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:38:57.657548    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:38:57.695308    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:38:57.695319    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:38:57.707006    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:38:57.707016    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:38:57.718764    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:38:57.718777    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:38:57.723740    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:38:57.723750    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:38:57.744970    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:38:57.744981    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:38:57.757190    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:38:57.757202    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:38:57.781544    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:38:57.781556    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
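
The "container status" command in each cycle uses a two-stage shell fallback worth unpacking: the inner `which crictl || echo crictl` substitutes the bare word crictl when the binary is not on PATH, keeping the command line well-formed, and the outer `|| sudo docker ps -a` fires only when that crictl invocation exits non-zero. A small Go sketch running the same line locally through `/bin/bash -c`, the way ssh_runner runs it remotely; this is purely illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same fallback chain as the logged "container status" command:
	//   `which crictl || echo crictl` -- emits a path to crictl, or the
	//                                    bare word "crictl" if not installed;
	//   ... || sudo docker ps -a      -- runs only when the crictl attempt
	//                                    exits non-zero.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
```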
	I0917 02:39:00.296593    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:05.299094    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:05.299345    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:05.318267    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:05.318384    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:05.333887    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:05.333981    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:05.345712    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:05.345793    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:05.356522    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:05.356598    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:05.366735    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:05.366800    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:05.377570    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:05.377657    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:05.390001    4370 logs.go:276] 0 containers: []
	W0917 02:39:05.390014    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:05.390090    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:05.402132    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:05.402151    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:05.402157    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:05.414460    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:05.414471    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:05.451940    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:05.451953    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:05.468008    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:05.468022    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:05.487207    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:05.487216    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:05.498187    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:05.498197    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:05.535702    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:05.535712    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:05.571494    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:05.571506    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:05.587346    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:05.587362    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:05.599674    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:05.599685    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:05.622145    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:05.622163    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:05.635536    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:05.635548    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:05.651539    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:05.651550    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:05.678041    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:05.678055    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:05.682811    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:05.682817    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:05.698520    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:05.698528    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:05.713327    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:05.713342    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:08.226799    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:13.229126    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:13.229322    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:13.242866    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:13.242960    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:13.254052    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:13.254148    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:13.264368    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:13.264442    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:13.275012    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:13.275099    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:13.285586    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:13.285659    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:13.297285    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:13.297377    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:13.306915    4370 logs.go:276] 0 containers: []
	W0917 02:39:13.306926    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:13.307003    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:13.317394    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:13.317413    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:13.317418    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:13.321608    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:13.321618    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:13.337279    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:13.337288    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:13.355008    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:13.355018    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:13.368260    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:13.368270    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:13.379975    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:13.379986    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:13.419731    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:13.419747    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:13.434487    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:13.434500    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:13.450372    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:13.450385    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:13.462879    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:13.462892    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:13.475634    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:13.475647    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:13.494730    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:13.494746    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:13.521449    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:13.521471    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:13.560048    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:13.560059    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:13.599059    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:13.599080    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:13.615308    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:13.615317    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:13.627960    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:13.627975    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:16.153941    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:21.156234    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:21.156585    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:21.177778    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:21.177901    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:21.193179    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:21.193271    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:21.205267    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:21.205359    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:21.216241    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:21.216287    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:21.227950    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:21.227994    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:21.239499    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:21.239550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:21.250260    4370 logs.go:276] 0 containers: []
	W0917 02:39:21.250272    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:21.250349    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:21.265164    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:21.265183    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:21.265189    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:21.301991    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:21.302003    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:21.316101    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:21.316112    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:21.338236    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:21.338261    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:21.378872    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:21.378905    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:21.383784    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:21.383792    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:21.396288    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:21.396300    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:21.420761    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:21.420777    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:21.440131    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:21.440142    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:21.452617    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:21.452630    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:21.466304    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:21.466315    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:21.520119    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:21.520139    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:21.535719    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:21.535732    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:21.552627    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:21.552639    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:21.574398    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:21.574411    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:21.588594    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:21.588608    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:21.603336    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:21.603349    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:24.120159    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:29.122456    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:29.122550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:29.133802    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:29.133879    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:29.145217    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:29.145294    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:29.156780    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:29.156861    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:29.168128    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:29.168216    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:29.179169    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:29.179253    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:29.190476    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:29.190557    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:29.201121    4370 logs.go:276] 0 containers: []
	W0917 02:39:29.201133    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:29.201211    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:29.212250    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:29.212266    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:29.212272    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:29.217067    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:29.217076    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:29.253524    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:29.253534    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:29.266227    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:29.266238    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:29.281771    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:29.281780    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:29.294920    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:29.294928    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:29.310036    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:29.310045    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:29.330937    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:29.330950    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:29.371126    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:29.371137    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:29.383377    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:29.383390    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:29.395701    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:29.395714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:29.417598    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:29.417613    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:29.429725    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:29.429741    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:29.452650    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:29.452659    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:29.464762    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:29.464772    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:29.504417    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:29.504425    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:29.518477    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:29.518505    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:32.036694    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:37.038965    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:37.039065    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:37.051570    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:37.051656    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:37.063881    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:37.063968    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:37.077317    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:37.077407    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:37.090244    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:37.090337    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:37.101379    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:37.101460    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:37.112926    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:37.113014    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:37.124038    4370 logs.go:276] 0 containers: []
	W0917 02:39:37.124050    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:37.124128    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:37.135923    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:37.135942    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:37.135948    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:37.149276    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:37.149290    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:37.161306    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:37.161320    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:37.185827    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:37.185844    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:37.201175    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:37.201184    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:37.213652    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:37.213663    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:37.225621    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:37.225633    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:37.248807    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:37.248819    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:37.260340    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:37.260351    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:37.297553    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:37.297561    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:37.334600    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:37.334612    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:37.348864    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:37.348873    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:37.366405    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:37.366416    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:37.370426    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:37.370435    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:37.384041    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:37.384050    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:37.399330    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:37.399342    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:37.411859    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:37.411871    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:39.946811    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:44.948485    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:44.948587    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:44.960218    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:44.960309    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:44.971623    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:44.971710    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:44.982852    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:44.982939    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:44.993167    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:44.993255    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:45.004514    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:45.004595    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:45.016006    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:45.016095    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:45.027555    4370 logs.go:276] 0 containers: []
	W0917 02:39:45.027568    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:45.027650    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:45.038796    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:45.038813    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:45.038818    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:45.053640    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:45.053652    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:45.066289    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:45.066304    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:45.078706    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:45.078718    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:45.095300    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:45.095315    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:45.110071    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:45.110084    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:45.121649    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:45.121665    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:45.161496    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:45.161506    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:45.194965    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:45.194980    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:45.209952    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:45.209968    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:45.213991    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:45.214000    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:45.251232    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:45.251249    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:45.264699    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:45.264712    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:45.275996    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:45.276009    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:45.300367    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:45.300374    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:45.312265    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:45.312280    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:45.323954    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:45.323964    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
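Each cycle in this log follows the same shape: probe `https://10.0.2.15:8443/healthz`, hit the client timeout, then re-enumerate containers and re-gather their logs before the next probe. A minimal sketch of that poll-and-diagnose loop follows, assuming a short fixed timeout and a self-signed apiserver certificate; `checkHealthz` and the loop bounds are illustrative, not minikube's API:

```go
// Illustrative poll loop matching the pattern in the log above: probe
// /healthz with a short client timeout; on failure, fall back to log
// gathering before retrying.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs a single GET against the apiserver healthz
// endpoint, skipping cert verification as a bootstrap probe typically must.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded (Client.Timeout exceeded ...)"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const url = "https://10.0.2.15:8443/healthz"
	for i := 0; i < 3; i++ { // the real loop runs until an overall deadline
		if err := checkHealthz(url, 5*time.Second); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			// Here the real code shells out to `docker ps -a --filter=name=k8s_...`
			// and `docker logs --tail 400 <id>` for each control-plane container.
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```

The roughly five-second gap between each `Checking apiserver healthz` line and its `stopped:` line above is consistent with a 5s client timeout, as the `Client.Timeout exceeded while awaiting headers` error indicates.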
	I0917 02:39:47.849278    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:39:52.851512    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:39:52.851700    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:39:52.863694    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:39:52.863771    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:39:52.886580    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:39:52.886672    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:39:52.898824    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:39:52.898910    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:39:52.910108    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:39:52.910195    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:39:52.921667    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:39:52.921761    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:39:52.934692    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:39:52.934779    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:39:52.945214    4370 logs.go:276] 0 containers: []
	W0917 02:39:52.945227    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:39:52.945303    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:39:52.957854    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:39:52.957871    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:39:52.957876    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:39:52.980050    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:39:52.980061    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:39:53.000845    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:39:53.000854    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:39:53.012315    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:39:53.012328    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:39:53.024882    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:39:53.024892    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:39:53.062544    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:39:53.062558    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:39:53.078085    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:39:53.078094    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:39:53.089976    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:39:53.089986    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:39:53.105234    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:39:53.105243    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:39:53.127797    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:39:53.127806    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:39:53.141965    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:39:53.141974    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:39:53.175406    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:39:53.175416    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:39:53.189860    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:39:53.189876    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:39:53.213548    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:39:53.213561    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:39:53.252324    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:39:53.252332    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:39:53.266190    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:39:53.266200    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:39:53.277359    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:39:53.277368    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:39:55.783425    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:00.784188    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:00.784314    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:00.796324    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:00.796417    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:00.808932    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:00.809024    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:00.821005    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:00.821102    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:00.832105    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:00.832191    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:00.843487    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:00.843574    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:00.855060    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:00.855151    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:00.865942    4370 logs.go:276] 0 containers: []
	W0917 02:40:00.865953    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:00.866024    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:00.880531    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:00.880549    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:00.880555    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:00.884983    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:00.884992    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:00.923098    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:00.923111    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:00.936871    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:00.936886    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:00.954983    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:00.954995    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:00.969926    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:00.969939    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:00.992197    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:00.992205    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:01.029802    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:01.029808    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:01.063414    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:01.063429    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:01.078066    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:01.078076    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:01.098703    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:01.098714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:01.110430    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:01.110440    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:01.128713    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:01.128723    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:01.140153    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:01.140170    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:01.154336    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:01.154346    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:01.166130    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:01.166141    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:01.179270    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:01.179282    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:03.692711    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:08.695024    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:08.695124    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:08.706592    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:08.706675    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:08.717801    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:08.717893    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:08.734651    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:08.734734    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:08.752101    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:08.752190    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:08.762812    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:08.762900    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:08.773345    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:08.773434    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:08.784488    4370 logs.go:276] 0 containers: []
	W0917 02:40:08.784500    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:08.784572    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:08.794889    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:08.794907    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:08.794914    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:08.833523    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:08.833535    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:08.848982    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:08.848995    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:08.863586    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:08.863597    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:08.878209    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:08.878224    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:08.890801    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:08.890812    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:08.915701    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:08.915710    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:08.955299    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:08.955313    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:08.960095    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:08.960105    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:08.974995    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:08.975012    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:08.986513    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:08.986525    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:08.999570    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:08.999582    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:09.027319    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:09.027333    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:09.038569    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:09.038579    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:09.076539    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:09.076555    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:09.090015    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:09.090029    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:09.105078    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:09.105091    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:11.634177    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:16.636412    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:16.636549    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:16.647272    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:16.647361    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:16.657869    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:16.657958    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:16.669311    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:16.669395    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:16.682480    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:16.682561    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:16.700753    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:16.700837    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:16.711186    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:16.711264    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:16.725621    4370 logs.go:276] 0 containers: []
	W0917 02:40:16.725632    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:16.725707    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:16.736473    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:16.736493    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:16.736498    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:16.754393    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:16.754409    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:16.769166    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:16.769176    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:16.783638    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:16.783650    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:16.795207    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:16.795218    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:16.807314    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:16.807329    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:16.842942    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:16.842957    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:16.884953    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:16.884963    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:16.899345    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:16.899359    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:16.920396    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:16.920410    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:16.932780    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:16.932792    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:16.956325    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:16.956338    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:16.979060    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:16.979070    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:17.015637    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:17.015645    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:17.030140    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:17.030151    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:17.041724    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:17.041734    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:17.054766    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:17.054780    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:19.560831    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:24.563112    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:24.563239    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:24.573571    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:24.573662    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:24.584657    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:24.584746    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:24.594822    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:24.594904    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:24.606067    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:24.606146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:24.621236    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:24.621313    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:24.636967    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:24.637055    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:24.646881    4370 logs.go:276] 0 containers: []
	W0917 02:40:24.646894    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:24.646968    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:24.657538    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:24.657554    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:24.657559    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:24.671819    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:24.671829    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:24.689073    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:24.689085    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:24.703038    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:24.703049    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:24.714307    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:24.714319    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:24.738147    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:24.738155    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:24.776424    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:24.776432    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:24.790028    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:24.790039    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:24.801528    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:24.801539    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:24.817055    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:24.817066    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:24.828555    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:24.828566    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:24.866774    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:24.866784    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:24.880680    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:24.880692    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:24.892018    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:24.892030    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:24.896336    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:24.896343    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:24.934334    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:24.934346    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:24.946103    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:24.946114    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:27.469171    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:32.471478    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:32.471647    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:32.484056    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:32.484146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:32.494767    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:32.494842    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:32.505474    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:32.505559    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:32.516620    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:32.516702    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:32.526611    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:32.526690    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:32.537106    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:32.537197    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:32.547380    4370 logs.go:276] 0 containers: []
	W0917 02:40:32.547391    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:32.547465    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:32.557711    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:32.557744    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:32.557751    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:32.563249    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:32.563261    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:32.577181    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:32.577190    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:32.588208    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:32.588218    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:32.610440    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:32.610448    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:32.647503    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:32.647511    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:32.661289    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:32.661298    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:32.673011    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:32.673022    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:32.690168    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:32.690180    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:32.702710    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:32.702729    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:32.738702    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:32.738714    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:32.776900    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:32.776912    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:32.791182    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:32.791192    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:32.803434    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:32.803448    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:32.815450    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:32.815462    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:32.829863    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:32.829873    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:32.850893    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:32.850904    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:35.367886    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:40.370161    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:40.370352    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:40.383634    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:40.383723    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:40.394419    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:40.394511    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:40.405085    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:40.405166    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:40.420454    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:40.420540    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:40.433922    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:40.434001    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:40.444670    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:40.444750    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:40.455158    4370 logs.go:276] 0 containers: []
	W0917 02:40:40.455171    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:40.455245    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:40.469942    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:40.469959    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:40.469964    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:40.510652    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:40.510664    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:40.525375    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:40.525384    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:40.545933    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:40.545944    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:40.558212    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:40.558225    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:40.562920    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:40.562928    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:40.599211    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:40.599223    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:40.615375    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:40.615387    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:40.636780    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:40.636794    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:40.648231    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:40.648246    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:40.661851    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:40.661862    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:40.675838    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:40.675851    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:40.688406    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:40.688418    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:40.727621    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:40.727639    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:40.739700    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:40.739711    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:40.755057    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:40.755070    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:40.767078    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:40.767089    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:43.293288    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:48.295576    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:48.295763    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:48.310721    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:48.310818    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:48.322940    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:48.323028    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:48.334305    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:48.334398    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:48.344980    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:48.345063    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:48.355706    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:48.355795    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:48.366719    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:48.366802    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:48.376807    4370 logs.go:276] 0 containers: []
	W0917 02:40:48.376819    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:48.376884    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:48.387686    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:48.387703    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:48.387708    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:48.399888    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:48.399898    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:48.421642    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:48.421648    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:48.433398    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:48.433409    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:48.452080    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:48.452090    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:48.463927    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:48.463936    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:48.481163    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:48.481171    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:48.496014    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:48.496025    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:48.500497    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:48.500505    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:48.539260    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:48.539270    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:48.554198    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:48.554208    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:48.568997    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:48.569007    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:48.582771    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:48.582780    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:48.617925    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:48.617935    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:48.632081    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:48.632091    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:48.654540    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:48.654550    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:48.669604    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:48.669619    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:51.209543    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:40:56.211871    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:40:56.212035    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:40:56.224879    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:40:56.224975    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:40:56.238424    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:40:56.238508    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:40:56.249019    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:40:56.249102    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:40:56.258754    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:40:56.258828    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:40:56.269443    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:40:56.269532    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:40:56.279419    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:40:56.279499    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:40:56.289553    4370 logs.go:276] 0 containers: []
	W0917 02:40:56.289565    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:40:56.289633    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:40:56.300094    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:40:56.300111    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:40:56.300116    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:40:56.339578    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:40:56.339588    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:40:56.373525    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:40:56.373536    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:40:56.385525    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:40:56.385537    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:40:56.404723    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:40:56.404735    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:40:56.408793    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:40:56.408800    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:40:56.425844    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:40:56.425854    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:40:56.438223    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:40:56.438233    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:40:56.452237    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:40:56.452246    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:40:56.466285    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:40:56.466300    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:40:56.481437    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:40:56.481450    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:40:56.502099    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:40:56.502109    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:40:56.517068    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:40:56.517079    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:40:56.539762    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:40:56.539768    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:40:56.577458    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:40:56.577471    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:40:56.592034    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:40:56.592045    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:40:56.603858    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:40:56.603870    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:40:59.118189    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:04.120348    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:04.120463    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:41:04.131707    4370 logs.go:276] 2 containers: [d622083a8766 b1296b57ee41]
	I0917 02:41:04.131792    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:41:04.142545    4370 logs.go:276] 2 containers: [6c2edec40538 7b4b71b6f19a]
	I0917 02:41:04.142632    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:41:04.159548    4370 logs.go:276] 1 containers: [2e11cc45a43b]
	I0917 02:41:04.159637    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:41:04.170092    4370 logs.go:276] 2 containers: [2cacf4f4924e 637480f75136]
	I0917 02:41:04.170186    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:41:04.180744    4370 logs.go:276] 1 containers: [18201582dc6b]
	I0917 02:41:04.180822    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:41:04.191218    4370 logs.go:276] 2 containers: [7896abb917a2 5d12a44bd79e]
	I0917 02:41:04.191306    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:41:04.201664    4370 logs.go:276] 0 containers: []
	W0917 02:41:04.201682    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:41:04.201764    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:41:04.212120    4370 logs.go:276] 2 containers: [3580174f4ef8 800a9ed53592]
	I0917 02:41:04.212137    4370 logs.go:123] Gathering logs for coredns [2e11cc45a43b] ...
	I0917 02:41:04.212143    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e11cc45a43b"
	I0917 02:41:04.224158    4370 logs.go:123] Gathering logs for kube-scheduler [637480f75136] ...
	I0917 02:41:04.224170    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 637480f75136"
	I0917 02:41:04.245139    4370 logs.go:123] Gathering logs for storage-provisioner [3580174f4ef8] ...
	I0917 02:41:04.245151    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3580174f4ef8"
	I0917 02:41:04.256339    4370 logs.go:123] Gathering logs for kube-controller-manager [7896abb917a2] ...
	I0917 02:41:04.256349    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7896abb917a2"
	I0917 02:41:04.273131    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:41:04.273141    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:41:04.286293    4370 logs.go:123] Gathering logs for kube-apiserver [d622083a8766] ...
	I0917 02:41:04.286304    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d622083a8766"
	I0917 02:41:04.301858    4370 logs.go:123] Gathering logs for kube-apiserver [b1296b57ee41] ...
	I0917 02:41:04.301868    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1296b57ee41"
	I0917 02:41:04.345531    4370 logs.go:123] Gathering logs for etcd [6c2edec40538] ...
	I0917 02:41:04.345544    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c2edec40538"
	I0917 02:41:04.359481    4370 logs.go:123] Gathering logs for kube-scheduler [2cacf4f4924e] ...
	I0917 02:41:04.359491    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cacf4f4924e"
	I0917 02:41:04.371140    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:41:04.371151    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:41:04.408773    4370 logs.go:123] Gathering logs for etcd [7b4b71b6f19a] ...
	I0917 02:41:04.408780    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b4b71b6f19a"
	I0917 02:41:04.422915    4370 logs.go:123] Gathering logs for storage-provisioner [800a9ed53592] ...
	I0917 02:41:04.422931    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 800a9ed53592"
	I0917 02:41:04.434470    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:41:04.434481    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:41:04.458416    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:41:04.458427    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:41:04.462616    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:41:04.462625    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:41:04.497479    4370 logs.go:123] Gathering logs for kube-proxy [18201582dc6b] ...
	I0917 02:41:04.497493    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18201582dc6b"
	I0917 02:41:04.509194    4370 logs.go:123] Gathering logs for kube-controller-manager [5d12a44bd79e] ...
	I0917 02:41:04.509205    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d12a44bd79e"
	I0917 02:41:07.026014    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:12.028048    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:12.028118    4370 kubeadm.go:597] duration metric: took 4m4.002254959s to restartPrimaryControlPlane
	W0917 02:41:12.028166    4370 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0917 02:41:12.028183    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
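	(The healthz endpoint that keeps timing out above can be probed directly; a minimal sketch, assuming the guest IP 10.0.2.15 from the log and skipping TLS verification:)

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# or, from inside the guest, with the node kubeconfig used elsewhere in this log:
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig /var/lib/minikube/kubeconfig get --raw /healthz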
	I0917 02:41:13.028261    4370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 02:41:13.033574    4370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 02:41:13.036467    4370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 02:41:13.039044    4370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 02:41:13.039051    4370 kubeadm.go:157] found existing configuration files:
	
	I0917 02:41:13.039080    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf
	I0917 02:41:13.041359    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 02:41:13.041381    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 02:41:13.044079    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf
	I0917 02:41:13.046537    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 02:41:13.046564    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 02:41:13.049330    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf
	I0917 02:41:13.052507    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 02:41:13.052534    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 02:41:13.055239    4370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf
	I0917 02:41:13.057555    4370 kubeadm.go:163] "https://control-plane.minikube.internal:50506" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50506 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 02:41:13.057579    4370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
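	(The four grep/rm pairs above are a stale-config sweep: any kubeconfig that does not reference the expected control-plane endpoint is removed before re-init. An equivalent loop, assuming the same files and port:)

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:50506" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done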
	I0917 02:41:13.060728    4370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 02:41:13.079571    4370 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0917 02:41:13.079606    4370 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 02:41:13.128134    4370 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 02:41:13.128186    4370 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 02:41:13.128228    4370 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0917 02:41:13.178739    4370 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 02:41:13.182964    4370 out.go:235]   - Generating certificates and keys ...
	I0917 02:41:13.182998    4370 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 02:41:13.183031    4370 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 02:41:13.183067    4370 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0917 02:41:13.183098    4370 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0917 02:41:13.183142    4370 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0917 02:41:13.183170    4370 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0917 02:41:13.183201    4370 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0917 02:41:13.183231    4370 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0917 02:41:13.183270    4370 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0917 02:41:13.183335    4370 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0917 02:41:13.183357    4370 kubeadm.go:310] [certs] Using the existing "sa" key
	I0917 02:41:13.183404    4370 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 02:41:13.265303    4370 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 02:41:13.482428    4370 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 02:41:13.602433    4370 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 02:41:13.667495    4370 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 02:41:13.696680    4370 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 02:41:13.696734    4370 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 02:41:13.696756    4370 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 02:41:13.771308    4370 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 02:41:13.775536    4370 out.go:235]   - Booting up control plane ...
	I0917 02:41:13.775602    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 02:41:13.775642    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 02:41:13.775685    4370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 02:41:13.775744    4370 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 02:41:13.775880    4370 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0917 02:41:18.274698    4370 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501391 seconds
	I0917 02:41:18.274768    4370 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 02:41:18.278760    4370 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 02:41:18.803486    4370 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 02:41:18.803895    4370 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-288000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 02:41:19.307183    4370 kubeadm.go:310] [bootstrap-token] Using token: 4vsdjq.4qj5uidod7poi6do
	I0917 02:41:19.310970    4370 out.go:235]   - Configuring RBAC rules ...
	I0917 02:41:19.311037    4370 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 02:41:19.311084    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 02:41:19.315594    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 02:41:19.317049    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 02:41:19.318035    4370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 02:41:19.319115    4370 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 02:41:19.322539    4370 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 02:41:19.477319    4370 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 02:41:19.712895    4370 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 02:41:19.713060    4370 kubeadm.go:310] 
	I0917 02:41:19.713094    4370 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 02:41:19.713096    4370 kubeadm.go:310] 
	I0917 02:41:19.713143    4370 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 02:41:19.713147    4370 kubeadm.go:310] 
	I0917 02:41:19.713162    4370 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 02:41:19.713221    4370 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 02:41:19.713253    4370 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 02:41:19.713258    4370 kubeadm.go:310] 
	I0917 02:41:19.713286    4370 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 02:41:19.713291    4370 kubeadm.go:310] 
	I0917 02:41:19.713314    4370 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 02:41:19.713317    4370 kubeadm.go:310] 
	I0917 02:41:19.713343    4370 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 02:41:19.713380    4370 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 02:41:19.713422    4370 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 02:41:19.713431    4370 kubeadm.go:310] 
	I0917 02:41:19.713476    4370 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 02:41:19.713517    4370 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 02:41:19.713521    4370 kubeadm.go:310] 
	I0917 02:41:19.713560    4370 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4vsdjq.4qj5uidod7poi6do \
	I0917 02:41:19.713613    4370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d \
	I0917 02:41:19.713627    4370 kubeadm.go:310] 	--control-plane 
	I0917 02:41:19.713631    4370 kubeadm.go:310] 
	I0917 02:41:19.713683    4370 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 02:41:19.713686    4370 kubeadm.go:310] 
	I0917 02:41:19.713728    4370 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4vsdjq.4qj5uidod7poi6do \
	I0917 02:41:19.713779    4370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3105cdadd1e1eaa420c61face26906cf5212dd9c9efeb8ef9725bc0a50fd268d 
	I0917 02:41:19.714024    4370 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
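	(The [WARNING Service-Kubelet] above is advisory and can be cleared inside the guest with the command kubeadm itself suggests:)

	sudo systemctl enable kubelet.service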
	I0917 02:41:19.714034    4370 cni.go:84] Creating CNI manager for ""
	I0917 02:41:19.714065    4370 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:41:19.721100    4370 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 02:41:19.725125    4370 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 02:41:19.728291    4370 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
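	(The 496-byte /etc/cni/net.d/1-k8s.conflist is written from memory and never dumped in this log; the sketch below shows only the general shape of a bridge conflist — the fields are standard CNI bridge/host-local/portmap keys, and the subnet is an assumption, not taken from this run.)

	# Illustrative only - minikube writes its own conflist; subnet is assumed.
	cat <<-'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF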
	I0917 02:41:19.733012    4370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 02:41:19.733072    4370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 02:41:19.733086    4370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-288000 minikube.k8s.io/updated_at=2024_09_17T02_41_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=9256ba43b41ea130fa48757ddb8d93db00574f61 minikube.k8s.io/name=stopped-upgrade-288000 minikube.k8s.io/primary=true
	I0917 02:41:19.775257    4370 ops.go:34] apiserver oom_adj: -16
	I0917 02:41:19.775319    4370 kubeadm.go:1113] duration metric: took 42.289209ms to wait for elevateKubeSystemPrivileges
	I0917 02:41:19.775331    4370 kubeadm.go:394] duration metric: took 4m11.762931708s to StartCluster
	I0917 02:41:19.775343    4370 settings.go:142] acquiring lock: {Name:mk2d861f3b7e502753ec34b4d96136a66d57e5dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:41:19.775439    4370 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:41:19.775908    4370 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/kubeconfig: {Name:mkb79e559d17024b096623143f764244ebf5b237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:41:19.776118    4370 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:41:19.776204    4370 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:41:19.776182    4370 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 02:41:19.776246    4370 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-288000"
	I0917 02:41:19.776255    4370 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-288000"
	W0917 02:41:19.776261    4370 addons.go:243] addon storage-provisioner should already be in state true
	I0917 02:41:19.776264    4370 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-288000"
	I0917 02:41:19.776270    4370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-288000"
	I0917 02:41:19.776272    4370 host.go:66] Checking if "stopped-upgrade-288000" exists ...
	I0917 02:41:19.777235    4370 kapi.go:59] client config for stopped-upgrade-288000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/stopped-upgrade-288000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106395800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 02:41:19.777358    4370 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-288000"
	W0917 02:41:19.777363    4370 addons.go:243] addon default-storageclass should already be in state true
	I0917 02:41:19.777376    4370 host.go:66] Checking if "stopped-upgrade-288000" exists ...
	I0917 02:41:19.780103    4370 out.go:177] * Verifying Kubernetes components...
	I0917 02:41:19.780440    4370 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 02:41:19.784323    4370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 02:41:19.784330    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:41:19.788001    4370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 02:41:19.792134    4370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 02:41:19.793202    4370 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:41:19.793207    4370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 02:41:19.793211    4370 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/stopped-upgrade-288000/id_rsa Username:docker}
	I0917 02:41:19.867682    4370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 02:41:19.873159    4370 api_server.go:52] waiting for apiserver process to appear ...
	I0917 02:41:19.873215    4370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 02:41:19.877163    4370 api_server.go:72] duration metric: took 101.034708ms to wait for apiserver process to appear ...
	I0917 02:41:19.877170    4370 api_server.go:88] waiting for apiserver healthz status ...
	I0917 02:41:19.877177    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:19.903449    4370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 02:41:19.919049    4370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 02:41:20.235871    4370 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 02:41:20.235882    4370 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 02:41:24.879243    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:24.879302    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:29.879740    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:29.879777    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:34.880147    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:34.880171    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:39.880612    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:39.880636    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:44.881215    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:44.881236    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:49.881988    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:49.882019    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0917 02:41:50.238041    4370 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0917 02:41:50.243406    4370 out.go:177] * Enabled addons: storage-provisioner
	I0917 02:41:50.251257    4370 addons.go:510] duration metric: took 30.475315334s for enable addons: enabled=[storage-provisioner]
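	(Once the apiserver answers healthz, the addon that failed above can be retried for this profile:)

	minikube -p stopped-upgrade-288000 addons enable default-storageclass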
	I0917 02:41:54.882966    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:54.883008    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:41:59.884335    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:41:59.884358    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:04.885926    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:04.885980    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:09.887954    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:09.887976    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:14.889582    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:14.889623    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:19.891813    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:19.891935    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:19.903411    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:19.903513    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:19.952147    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:19.952246    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:19.977941    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:19.978030    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:19.988513    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:19.988605    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:19.998972    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:19.999063    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:20.009074    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:20.009159    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:20.019475    4370 logs.go:276] 0 containers: []
	W0917 02:42:20.019488    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:20.019581    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:20.029683    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:20.029698    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:20.029703    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:20.068683    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:20.068692    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:20.106592    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:20.106607    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:20.118190    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:20.118203    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:20.129482    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:20.129499    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:20.141974    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:20.141989    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:20.161554    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:20.161564    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:20.185133    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:20.185141    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:20.196149    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:20.196171    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:20.200716    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:20.200723    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:20.219585    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:20.219595    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:20.235117    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:20.235128    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:20.250328    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:20.250337    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:42:22.765100    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:27.767797    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:27.768369    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:27.810937    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:27.811109    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:27.831834    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:27.831960    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:27.847254    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:27.847344    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:27.860432    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:27.860523    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:27.870631    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:27.870714    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:27.880965    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:27.881041    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:27.890994    4370 logs.go:276] 0 containers: []
	W0917 02:42:27.891005    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:27.891074    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:27.901065    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:27.901079    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:27.901084    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:27.915168    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:27.915181    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:27.929429    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:27.929438    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:27.946447    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:27.946458    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:27.969673    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:27.969681    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:27.973588    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:27.973596    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:28.007728    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:28.007739    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:28.022367    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:28.022378    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:28.033882    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:28.033892    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:28.049655    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:28.049665    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:28.060855    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:28.060867    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:42:28.072565    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:28.072574    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:28.084107    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:28.084121    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:30.622789    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:35.625196    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:35.625692    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:35.657417    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:35.657555    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:35.675554    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:35.675672    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:35.691349    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:35.691443    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:35.704179    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:35.704250    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:35.714707    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:35.714776    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:35.725172    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:35.725255    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:35.735665    4370 logs.go:276] 0 containers: []
	W0917 02:42:35.735680    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:35.735751    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:35.746293    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:35.746307    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:35.746313    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:35.783960    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:35.783969    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:35.798538    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:35.798547    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:35.809964    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:35.809975    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:35.824239    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:35.824249    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:42:35.836118    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:35.836128    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:35.860992    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:35.861000    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:35.872156    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:35.872167    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:35.876665    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:35.876673    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:35.909966    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:35.909976    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:35.923924    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:35.923933    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:35.935315    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:35.935327    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:35.947206    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:35.947215    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:38.467303    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:43.469236    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:43.469471    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:43.499231    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:43.499375    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:43.516751    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:43.516861    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:43.530275    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:43.530366    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:43.541795    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:43.541878    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:43.552108    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:43.552190    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:43.567899    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:43.567985    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:43.591248    4370 logs.go:276] 0 containers: []
	W0917 02:42:43.591257    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:43.591321    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:43.602038    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:43.602053    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:43.602059    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:43.606600    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:43.606606    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:43.640139    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:43.640151    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:43.657497    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:43.657507    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:43.680767    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:43.680774    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:43.692370    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:43.692380    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:43.710376    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:43.710388    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:42:43.721372    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:43.721385    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:43.757500    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:43.757510    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:43.771474    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:43.771485    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:43.783202    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:43.783214    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:43.795087    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:43.795097    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:43.809209    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:43.809218    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:46.323371    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:51.326021    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:51.326319    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:51.349534    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:51.349668    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:51.366219    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:51.366329    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:51.383191    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:51.383280    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:51.393638    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:51.393716    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:51.405136    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:51.405216    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:51.416132    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:51.416216    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:51.426224    4370 logs.go:276] 0 containers: []
	W0917 02:42:51.426236    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:51.426305    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:51.436576    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:51.436595    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:51.436600    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:51.454070    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:51.454080    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:42:51.466144    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:51.466155    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:51.490940    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:51.490948    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:51.502702    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:51.502719    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:51.515098    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:51.515108    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:51.520015    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:51.520024    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:51.555456    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:51.555468    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:51.573321    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:51.573337    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:51.587033    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:51.587044    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:51.598743    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:51.598753    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:51.610087    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:51.610098    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:51.624684    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:51.624693    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:54.166808    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:42:59.169223    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:42:59.169752    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:42:59.207239    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:42:59.207408    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:42:59.227900    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:42:59.228030    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:42:59.243853    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:42:59.243935    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:42:59.258777    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:42:59.258867    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:42:59.270462    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:42:59.270555    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:42:59.280951    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:42:59.281029    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:42:59.291546    4370 logs.go:276] 0 containers: []
	W0917 02:42:59.291563    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:42:59.291644    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:42:59.302158    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:42:59.302173    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:42:59.302179    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:42:59.320340    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:42:59.320351    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:42:59.343707    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:42:59.343713    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:42:59.377567    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:42:59.377579    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:42:59.392238    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:42:59.392247    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:42:59.403986    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:42:59.403995    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:42:59.415578    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:42:59.415589    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:42:59.430196    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:42:59.430206    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:42:59.441811    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:42:59.441823    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:42:59.479602    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:42:59.479612    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:42:59.484113    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:42:59.484123    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:42:59.499063    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:42:59.499073    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:42:59.511031    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:42:59.511042    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:02.028898    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:07.030670    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:07.031198    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:07.067978    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:07.068143    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:07.087528    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:07.087659    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:07.102013    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:43:07.102106    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:07.113993    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:07.114081    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:07.124606    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:07.124690    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:07.135112    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:07.135199    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:07.145604    4370 logs.go:276] 0 containers: []
	W0917 02:43:07.145617    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:07.145713    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:07.156401    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:07.156419    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:07.156425    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:07.168097    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:07.168107    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:07.179993    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:07.180005    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:07.191296    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:07.191307    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:07.206436    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:07.206449    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:07.218029    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:07.218043    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:07.234025    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:07.234035    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:07.250703    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:07.250715    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:07.289308    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:07.289315    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:07.293303    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:07.293311    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:07.328281    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:07.328297    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:07.343276    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:07.343288    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:07.367088    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:07.367102    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
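
Each failed healthz probe above triggers the same diagnostic sweep: enumerate each control-plane container by name filter, tail its logs, then collect kubelet, dmesg, node, Docker, and container-status output. A condensed sketch of that sweep as a standalone script, assuming it runs inside the minikube guest where the k8s_<component> naming convention holds (the commands are taken verbatim from the transcript; only the loop structure is added):

    #!/bin/bash
    # Sketch of the per-cycle diagnostic sweep shown in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter=name=k8s_${name} --format={{.ID}})
      if [ -z "$ids" ]; then
        echo "No container was found matching \"${name}\""
        continue
      fi
      for id in $ids; do
        docker logs --tail 400 "$id"   # same 400-line tail minikube uses
      done
    done
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u docker -u cri-docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
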
	I0917 02:43:09.879118    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:14.879671    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:14.879906    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:14.904319    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:14.904455    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:14.921748    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:14.921850    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:14.934229    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:43:14.934311    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:14.944895    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:14.944980    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:14.954614    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:14.954688    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:14.965005    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:14.965086    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:14.975001    4370 logs.go:276] 0 containers: []
	W0917 02:43:14.975017    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:14.975086    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:14.985306    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:14.985322    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:14.985327    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:14.999402    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:14.999412    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:15.016712    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:15.016723    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:15.053508    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:15.053518    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:15.087463    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:15.087474    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:15.104170    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:15.104183    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:15.116084    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:15.116094    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:15.134194    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:15.134205    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:43:15.152123    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:15.152132    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:15.156382    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:15.156390    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:15.170411    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:15.170425    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:15.182181    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:15.182191    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:15.193706    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:15.193716    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:17.720560    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:22.722871    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:22.723152    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:22.740772    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:22.740877    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:22.753571    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:22.753659    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:22.764604    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:43:22.764685    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:22.775113    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:22.775208    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:22.785378    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:22.785465    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:22.796080    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:22.796159    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:22.805970    4370 logs.go:276] 0 containers: []
	W0917 02:43:22.805983    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:22.806048    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:22.819076    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:22.819092    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:22.819097    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:22.858141    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:22.858149    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:22.862799    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:22.862807    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:22.904078    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:22.904089    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:22.919470    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:22.919481    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:22.932951    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:22.932960    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:22.946996    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:22.947005    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:22.965562    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:22.965572    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:22.977103    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:22.977113    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:22.988476    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:22.988488    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:23.009304    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:23.009315    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:23.020731    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:23.020741    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:23.044311    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:23.044321    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:43:25.557444    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:30.559968    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:30.560459    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:30.595097    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:30.595262    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:30.615805    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:30.615937    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:30.632556    4370 logs.go:276] 2 containers: [284709a80c36 b8158def61e3]
	I0917 02:43:30.632651    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:30.649300    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:30.649386    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:30.659785    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:30.659872    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:30.675636    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:30.675715    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:30.686900    4370 logs.go:276] 0 containers: []
	W0917 02:43:30.686915    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:30.686990    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:30.697695    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:30.697711    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:30.697718    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:30.733892    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:30.733901    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:30.737826    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:30.737835    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:30.772523    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:30.772535    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:30.787420    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:30.787435    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:30.801968    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:30.801977    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:30.816332    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:30.816341    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:30.828086    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:30.828097    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:30.846015    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:30.846027    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:30.857975    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:30.857984    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:30.869928    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:30.869939    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:30.881410    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:30.881420    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:30.905749    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:30.905757    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
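
The probe itself is a plain HTTPS GET against /healthz with a five-second client timeout; every "stopped:" line above is that timeout expiring, not a refused connection. A minimal way to reproduce the probe by hand from the same vantage point, assuming curl is available in the guest:

    # -k skips certificate verification; --max-time 5 mirrors the
    # five-second client timeout seen in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    echo "exit=$?"   # curl exit 28 = timeout, matching "context deadline exceeded"
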
	I0917 02:43:33.419200    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:38.421983    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:38.422556    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:38.465418    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:38.465588    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:38.486603    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:38.486723    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:38.501671    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:43:38.501767    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:38.514134    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:38.514221    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:38.524659    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:38.524732    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:38.535492    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:38.535581    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:38.546221    4370 logs.go:276] 0 containers: []
	W0917 02:43:38.546235    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:38.546307    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:38.556911    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:38.556936    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:38.556942    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:38.572786    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:38.572800    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:38.590312    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:38.590321    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:38.616557    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:38.616566    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:38.655470    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:38.655481    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:38.659924    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:38.659932    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:38.673909    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:43:38.673919    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:43:38.685311    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:38.685323    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:38.700037    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:38.700047    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:38.734929    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:38.734942    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:38.752084    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:43:38.752096    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:43:38.763246    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:38.763257    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:38.774700    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:38.774710    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:38.786192    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:38.786200    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:43:38.797946    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:38.797957    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
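
Between the 02:43:30 and 02:43:38 sweeps the coredns match count doubles from 2 to 4 ([f82f063f81d3 66de5532686a] appear alongside [284709a80c36 b8158def61e3]); since the filter uses docker ps -a, the extra matches are likely exited or restarted instances accumulating rather than new live replicas. A quick way to check their states, assuming the same name filter:

    # List every coredns container with its state; "Exited" entries
    # indicate restarts rather than additional running replicas.
    docker ps -a --filter=name=k8s_coredns \
      --format "table {{.ID}}\t{{.Status}}\t{{.CreatedAt}}"
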
	I0917 02:43:41.311462    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:46.314468    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:46.314558    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:46.327309    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:46.327385    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:46.339105    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:46.339163    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:46.350249    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:43:46.350324    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:46.362460    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:46.362544    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:46.374427    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:46.374514    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:46.394490    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:46.394574    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:46.406278    4370 logs.go:276] 0 containers: []
	W0917 02:43:46.406290    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:46.406370    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:46.418766    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:46.418785    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:46.418791    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:46.434953    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:43:46.434972    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:43:46.448220    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:46.448234    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:46.461233    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:46.461246    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:46.476882    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:46.476898    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:46.515982    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:46.515994    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:46.531191    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:46.531203    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:46.543399    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:46.543410    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:46.555470    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:46.555481    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:46.581443    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:46.581451    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:46.586020    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:46.586026    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:46.603601    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:46.603611    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:46.616701    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:43:46.616710    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:43:46.628961    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:46.628972    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:43:46.641273    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:46.641283    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:49.179912    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:43:54.182596    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:43:54.182734    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:43:54.196307    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:43:54.196391    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:43:54.207355    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:43:54.207440    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:43:54.217512    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:43:54.217584    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:43:54.228497    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:43:54.228577    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:43:54.239032    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:43:54.239105    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:43:54.249256    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:43:54.249338    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:43:54.259062    4370 logs.go:276] 0 containers: []
	W0917 02:43:54.259074    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:43:54.259141    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:43:54.269024    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:43:54.269041    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:43:54.269047    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:43:54.273349    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:43:54.273357    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:43:54.288466    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:43:54.288477    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:43:54.299601    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:43:54.299612    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:43:54.311061    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:43:54.311072    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:43:54.325125    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:43:54.325135    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:43:54.336198    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:43:54.336210    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:43:54.347297    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:43:54.347306    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:43:54.358894    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:43:54.358905    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:43:54.377602    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:43:54.377612    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:43:54.400125    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:43:54.400135    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:43:54.423753    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:43:54.423760    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:43:54.460031    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:43:54.460037    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:43:54.493199    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:43:54.493215    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:43:54.507612    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:43:54.507622    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:43:57.021224    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:02.023464    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:02.024096    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:02.064223    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:02.064384    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:02.086576    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:02.086704    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:02.101433    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:02.101534    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:02.119056    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:02.119146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:02.138387    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:02.138473    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:02.149058    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:02.149146    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:02.159766    4370 logs.go:276] 0 containers: []
	W0917 02:44:02.159777    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:02.159849    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:02.171857    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:02.171875    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:02.171881    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:02.188681    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:02.188692    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:02.202829    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:02.202842    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:02.218867    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:02.218876    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:02.257028    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:02.257037    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:02.268312    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:02.268326    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:02.280043    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:02.280054    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:02.294405    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:02.294416    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:02.308772    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:02.308785    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:02.346980    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:02.346992    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:02.358899    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:02.358913    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:02.371065    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:02.371074    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:02.388533    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:02.388543    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:02.399869    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:02.399886    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:02.424793    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:02.424802    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:04.930882    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:09.933123    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:09.933185    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:09.944834    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:09.944912    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:09.955987    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:09.956054    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:09.974024    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:09.974099    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:09.984781    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:09.984861    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:09.996007    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:09.996085    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:10.008374    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:10.008465    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:10.019667    4370 logs.go:276] 0 containers: []
	W0917 02:44:10.019676    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:10.019732    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:10.032028    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:10.032048    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:10.032055    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:10.044288    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:10.044299    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:10.063092    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:10.063101    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:10.075260    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:10.075273    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:10.087380    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:10.087390    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:10.103061    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:10.103071    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:10.115580    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:10.115588    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:10.154866    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:10.154876    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:10.194793    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:10.194802    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:10.209347    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:10.209358    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:10.221981    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:10.221993    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:10.227209    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:10.227220    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:10.242663    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:10.242671    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:10.260763    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:10.260775    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:10.287329    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:10.287340    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:12.801487    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:17.802280    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:17.802692    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:17.831945    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:17.832090    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:17.850820    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:17.850925    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:17.864788    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:17.864873    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:17.876758    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:17.876827    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:17.886713    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:17.886775    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:17.897922    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:17.898004    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:17.912558    4370 logs.go:276] 0 containers: []
	W0917 02:44:17.912568    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:17.912633    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:17.922751    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:17.922766    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:17.922772    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:17.934354    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:17.934364    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:17.951949    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:17.951957    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:17.975447    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:17.975455    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:18.011469    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:18.011477    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:18.059631    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:18.059644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:18.076334    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:18.076348    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:18.094225    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:18.094236    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:18.098812    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:18.098818    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:18.110656    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:18.110669    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:18.125427    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:18.125440    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:18.145243    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:18.145257    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:18.157529    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:18.157543    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:18.173029    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:18.173044    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:18.187426    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:18.187436    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:20.700918    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:25.703179    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:25.703662    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:25.733660    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:25.733803    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:25.752554    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:25.752648    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:25.766781    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:25.766886    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:25.779126    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:25.779215    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:25.791740    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:25.791818    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:25.802138    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:25.802219    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:25.812433    4370 logs.go:276] 0 containers: []
	W0917 02:44:25.812443    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:25.812501    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:25.823035    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:25.823049    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:25.823054    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:25.863165    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:25.863174    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:25.875519    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:25.875535    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:25.890352    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:25.890362    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:25.914861    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:25.914870    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:25.921713    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:25.921720    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:25.933528    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:25.933545    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:25.945315    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:25.945324    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:25.956842    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:25.956854    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:25.968745    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:25.968755    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:26.008842    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:26.008857    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:26.023964    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:26.023976    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:26.038330    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:26.038347    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:26.050141    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:26.050152    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:26.063056    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:26.063067    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:28.584382    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:33.586007    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:33.586083    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:33.601469    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:33.601541    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:33.615400    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:33.615459    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:33.626506    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:33.626586    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:33.637976    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:33.638045    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:33.650837    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:33.650897    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:33.662687    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:33.662752    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:33.672753    4370 logs.go:276] 0 containers: []
	W0917 02:44:33.672765    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:33.672828    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:33.685773    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:33.685803    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:33.685814    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:33.703759    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:33.703770    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:33.717786    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:33.717798    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:33.730231    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:33.730243    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:33.743469    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:33.743482    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:33.755418    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:33.755426    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:33.759755    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:33.759761    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:33.772209    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:33.772221    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:33.787911    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:33.787928    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:33.800191    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:33.800203    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:33.825098    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:33.825119    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:33.839703    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:33.839713    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:33.879273    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:33.879291    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:33.917495    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:33.917526    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:33.935786    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:33.935794    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:36.449906    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:41.452817    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:41.453357    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:41.493724    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:41.493902    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:41.515488    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:41.515615    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:41.531266    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:41.531358    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:41.543953    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:41.544042    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:41.554352    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:41.554438    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:41.566662    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:41.566747    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:41.576464    4370 logs.go:276] 0 containers: []
	W0917 02:44:41.576475    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:41.576538    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:41.586745    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:41.586763    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:41.586769    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:41.604033    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:41.604042    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:41.627152    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:41.627160    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:41.638124    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:41.638136    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:41.674706    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:41.674713    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:41.688024    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:41.688040    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:41.699706    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:41.699716    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:41.711276    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:41.711291    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:41.725347    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:41.725356    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:41.736820    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:41.736830    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:41.754708    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:41.754717    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:41.758971    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:41.758979    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:41.794507    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:41.794516    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:41.809146    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:41.809156    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:41.821138    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:41.821150    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:44.333149    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:49.335887    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:49.336433    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:49.377848    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:49.378048    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:49.397977    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:49.398092    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:49.412750    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:49.412832    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:49.425466    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:49.425550    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:49.436342    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:49.436421    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:49.446844    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:49.446923    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:49.456641    4370 logs.go:276] 0 containers: []
	W0917 02:44:49.456653    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:49.456717    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:49.467141    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:49.467157    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:49.467163    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:49.478555    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:49.478567    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:49.516597    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:49.516605    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:49.521300    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:49.521307    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:49.533161    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:49.533174    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:49.544579    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:49.544588    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:49.558831    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:49.558848    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:49.570613    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:49.570624    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:49.582682    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:49.582693    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:49.593716    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:49.593726    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:49.618362    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:49.618370    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:44:49.629748    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:49.629761    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:49.664558    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:49.664570    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:49.678408    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:49.678421    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:49.693101    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:49.693109    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:52.216104    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:44:57.218238    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:44:57.218344    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:44:57.230177    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:44:57.230242    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:44:57.241028    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:44:57.241101    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:44:57.251775    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:44:57.251851    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:44:57.264076    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:44:57.264175    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:44:57.279802    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:44:57.279870    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:44:57.291416    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:44:57.291512    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:44:57.302212    4370 logs.go:276] 0 containers: []
	W0917 02:44:57.302225    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:44:57.302304    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:44:57.314258    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:44:57.314275    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:44:57.314281    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:44:57.329739    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:44:57.329751    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:44:57.342405    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:44:57.342417    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:44:57.361493    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:44:57.361509    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:44:57.375690    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:44:57.375704    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:44:57.391269    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:44:57.391284    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:44:57.403668    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:44:57.403684    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:44:57.415925    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:44:57.415937    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:44:57.455975    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:44:57.455993    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:44:57.473099    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:44:57.473112    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:44:57.486112    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:44:57.486129    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:44:57.490834    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:44:57.490845    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:44:57.528496    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:44:57.528504    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:44:57.540785    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:44:57.540798    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:44:57.567377    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:44:57.567387    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:45:00.081821    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:45:05.084233    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:45:05.084540    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:45:05.101800    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:45:05.101898    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:45:05.115380    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:45:05.115475    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:45:05.126214    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:45:05.126299    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:45:05.136951    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:45:05.137026    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:45:05.147256    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:45:05.147329    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:45:05.157938    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:45:05.158012    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:45:05.167954    4370 logs.go:276] 0 containers: []
	W0917 02:45:05.167968    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:45:05.168035    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:45:05.178139    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:45:05.178157    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:45:05.178162    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:45:05.214181    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:45:05.214191    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:45:05.228223    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:45:05.228233    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:45:05.240158    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:45:05.240172    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:45:05.254729    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:45:05.254742    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:45:05.266095    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:45:05.266110    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:45:05.270190    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:45:05.270199    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:45:05.303165    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:45:05.303176    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:45:05.317924    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:45:05.317934    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:45:05.341657    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:45:05.341668    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:45:05.358930    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:45:05.358940    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:45:05.370323    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:45:05.370344    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:45:05.382083    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:45:05.382094    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:45:05.405014    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:45:05.405023    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:45:05.416119    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:45:05.416128    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:45:07.933025    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:45:12.935921    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:45:12.936536    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0917 02:45:12.978016    4370 logs.go:276] 1 containers: [6383e3c1b923]
	I0917 02:45:12.978180    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0917 02:45:12.999537    4370 logs.go:276] 1 containers: [fdf41b56689f]
	I0917 02:45:12.999660    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0917 02:45:13.017202    4370 logs.go:276] 4 containers: [f82f063f81d3 66de5532686a 284709a80c36 b8158def61e3]
	I0917 02:45:13.017296    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0917 02:45:13.031639    4370 logs.go:276] 1 containers: [3c8c47901c29]
	I0917 02:45:13.031724    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0917 02:45:13.042190    4370 logs.go:276] 1 containers: [d54edfa778d4]
	I0917 02:45:13.042276    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0917 02:45:13.054522    4370 logs.go:276] 1 containers: [c9ac43bd42f2]
	I0917 02:45:13.054598    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0917 02:45:13.064799    4370 logs.go:276] 0 containers: []
	W0917 02:45:13.064813    4370 logs.go:278] No container was found matching "kindnet"
	I0917 02:45:13.064885    4370 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0917 02:45:13.075677    4370 logs.go:276] 1 containers: [82ac45ca132e]
	I0917 02:45:13.075695    4370 logs.go:123] Gathering logs for describe nodes ...
	I0917 02:45:13.075700    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0917 02:45:13.108870    4370 logs.go:123] Gathering logs for coredns [66de5532686a] ...
	I0917 02:45:13.108881    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66de5532686a"
	I0917 02:45:13.121282    4370 logs.go:123] Gathering logs for storage-provisioner [82ac45ca132e] ...
	I0917 02:45:13.121297    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ac45ca132e"
	I0917 02:45:13.133633    4370 logs.go:123] Gathering logs for kube-scheduler [3c8c47901c29] ...
	I0917 02:45:13.133644    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c8c47901c29"
	I0917 02:45:13.148758    4370 logs.go:123] Gathering logs for kube-controller-manager [c9ac43bd42f2] ...
	I0917 02:45:13.148768    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9ac43bd42f2"
	I0917 02:45:13.166810    4370 logs.go:123] Gathering logs for coredns [f82f063f81d3] ...
	I0917 02:45:13.166823    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f82f063f81d3"
	I0917 02:45:13.178492    4370 logs.go:123] Gathering logs for coredns [b8158def61e3] ...
	I0917 02:45:13.178503    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8158def61e3"
	I0917 02:45:13.190128    4370 logs.go:123] Gathering logs for kube-proxy [d54edfa778d4] ...
	I0917 02:45:13.190139    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d54edfa778d4"
	I0917 02:45:13.201534    4370 logs.go:123] Gathering logs for Docker ...
	I0917 02:45:13.201543    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0917 02:45:13.225904    4370 logs.go:123] Gathering logs for container status ...
	I0917 02:45:13.225912    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0917 02:45:13.237104    4370 logs.go:123] Gathering logs for kubelet ...
	I0917 02:45:13.237115    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0917 02:45:13.275364    4370 logs.go:123] Gathering logs for kube-apiserver [6383e3c1b923] ...
	I0917 02:45:13.275371    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6383e3c1b923"
	I0917 02:45:13.292815    4370 logs.go:123] Gathering logs for coredns [284709a80c36] ...
	I0917 02:45:13.292825    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 284709a80c36"
	I0917 02:45:13.305097    4370 logs.go:123] Gathering logs for dmesg ...
	I0917 02:45:13.305112    4370 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0917 02:45:13.309470    4370 logs.go:123] Gathering logs for etcd [fdf41b56689f] ...
	I0917 02:45:13.309481    4370 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fdf41b56689f"
	I0917 02:45:15.825016    4370 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0917 02:45:20.827565    4370 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0917 02:45:20.834571    4370 out.go:201] 
	W0917 02:45:20.838580    4370 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0917 02:45:20.838599    4370 out.go:270] * 
	W0917 02:45:20.840086    4370 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:20.848541    4370 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (583.46s)
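
The upgrade run above never gets a healthy control plane: each cycle probes https://10.0.2.15:8443/healthz, hits the ~5s client timeout ("Client.Timeout exceeded while awaiting headers"), falls back to gathering container logs, and repeats until the 6m0s node-wait budget is exhausted. As a minimal sketch only (this is not minikube's own api_server.go code; the address and timeout are taken from the log, everything else is an assumption), an equivalent probe in Go looks like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// The 5s client timeout mirrors the gap before each
		// "context deadline exceeded" line in the log above.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // probe only
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}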

TestPause/serial/Start (9.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-506000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-506000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.915161083s)

-- stdout --
	* [pause-506000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-506000" primary control-plane node in "pause-506000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-506000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-506000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-506000 -n pause-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-506000 -n pause-506000: exit status 7 (65.827083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)
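
This failure, like the remaining qemu2 starts below, never reaches Kubernetes at all: host provisioning aborts because nothing is accepting connections on /var/run/socket_vmnet. A quick way to confirm that from the host is to dial the Unix socket directly; the sketch below is illustrative (the socket path comes from the log, the rest is an assumption, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// If the socket_vmnet daemon is down, this reproduces the
		// "Connection refused" seen throughout the test output.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}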

TestNoKubernetes/serial/StartWithK8s (9.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 : exit status 80 (9.900801083s)

-- stdout --
	* [NoKubernetes-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-376000" primary control-plane node in "NoKubernetes-376000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-376000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000: exit status 7 (57.075333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.96s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 : exit status 80 (5.238639375s)

-- stdout --
	* [NoKubernetes-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-376000
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-376000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000: exit status 7 (69.978042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)
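
The stderr above already names the recovery path. Once socket_vmnet is serving again, clearing the stale profile before retrying should look like the following (the start command is taken verbatim from the test invocation; the delete command is the one the log itself suggests):

	out/minikube-darwin-arm64 delete -p NoKubernetes-376000
	out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2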

TestNoKubernetes/serial/Start (5.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232619958s)

-- stdout --
	* [NoKubernetes-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-376000
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-376000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000: exit status 7 (39.84325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 : exit status 80 (5.273831958s)

-- stdout --
	* [NoKubernetes-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-376000
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-376000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-376000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-376000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-376000 -n NoKubernetes-376000: exit status 7 (52.671167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (9.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.843834666s)

-- stdout --
	* [auto-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-688000" primary control-plane node in "auto-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:43:34.273256    4572 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:43:34.273428    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:34.273431    4572 out.go:358] Setting ErrFile to fd 2...
	I0917 02:43:34.273433    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:34.273581    4572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:43:34.274629    4572 out.go:352] Setting JSON to false
	I0917 02:43:34.291557    4572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4384,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:43:34.291637    4572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:43:34.295909    4572 out.go:177] * [auto-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:43:34.301940    4572 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:43:34.302032    4572 notify.go:220] Checking for updates...
	I0917 02:43:34.309863    4572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:43:34.313874    4572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:43:34.317919    4572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:43:34.321852    4572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:43:34.325843    4572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:43:34.330155    4572 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:43:34.330223    4572 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:43:34.330270    4572 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:43:34.332956    4572 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:43:34.339811    4572 start.go:297] selected driver: qemu2
	I0917 02:43:34.339817    4572 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:43:34.339823    4572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:43:34.342151    4572 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:43:34.345808    4572 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:43:34.349993    4572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:43:34.350017    4572 cni.go:84] Creating CNI manager for ""
	I0917 02:43:34.350048    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:43:34.350054    4572 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:43:34.350083    4572 start.go:340] cluster config:
	{Name:auto-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:43:34.353960    4572 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:43:34.361838    4572 out.go:177] * Starting "auto-688000" primary control-plane node in "auto-688000" cluster
	I0917 02:43:34.365824    4572 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:43:34.365840    4572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:43:34.365848    4572 cache.go:56] Caching tarball of preloaded images
	I0917 02:43:34.365914    4572 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:43:34.365920    4572 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:43:34.365980    4572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/auto-688000/config.json ...
	I0917 02:43:34.365992    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/auto-688000/config.json: {Name:mkbdd1d3b06b2400b58c711d56ee65e2e9e5561e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:43:34.366346    4572 start.go:360] acquireMachinesLock for auto-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:34.366382    4572 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "auto-688000"
	I0917 02:43:34.366393    4572 start.go:93] Provisioning new machine with config: &{Name:auto-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:34.366425    4572 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:43:34.372823    4572 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:43:34.390835    4572 start.go:159] libmachine.API.Create for "auto-688000" (driver="qemu2")
	I0917 02:43:34.390871    4572 client.go:168] LocalClient.Create starting
	I0917 02:43:34.390934    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:43:34.390965    4572 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:34.390989    4572 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:34.391027    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:43:34.391051    4572 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:34.391064    4572 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:34.391489    4572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:43:34.549590    4572 main.go:141] libmachine: Creating SSH key...
	I0917 02:43:34.663946    4572 main.go:141] libmachine: Creating Disk image...
	I0917 02:43:34.663953    4572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:43:34.664163    4572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:34.673719    4572 main.go:141] libmachine: STDOUT: 
	I0917 02:43:34.673741    4572 main.go:141] libmachine: STDERR: 
	I0917 02:43:34.673801    4572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2 +20000M
	I0917 02:43:34.681995    4572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:43:34.682013    4572 main.go:141] libmachine: STDERR: 
	I0917 02:43:34.682031    4572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:34.682037    4572 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:43:34.682048    4572 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:43:34.682072    4572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:63:02:10:72:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:34.683890    4572 main.go:141] libmachine: STDOUT: 
	I0917 02:43:34.683904    4572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:43:34.683927    4572 client.go:171] duration metric: took 293.051375ms to LocalClient.Create
	I0917 02:43:36.686121    4572 start.go:128] duration metric: took 2.319678334s to createHost
	I0917 02:43:36.686235    4572 start.go:83] releasing machines lock for "auto-688000", held for 2.319856833s
	W0917 02:43:36.686297    4572 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:36.701280    4572 out.go:177] * Deleting "auto-688000" in qemu2 ...
	W0917 02:43:36.726612    4572 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:36.726636    4572 start.go:729] Will try again in 5 seconds ...
	I0917 02:43:41.728866    4572 start.go:360] acquireMachinesLock for auto-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:41.729434    4572 start.go:364] duration metric: took 410.541µs to acquireMachinesLock for "auto-688000"
	I0917 02:43:41.729529    4572 start.go:93] Provisioning new machine with config: &{Name:auto-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:41.729871    4572 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:43:41.739487    4572 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:43:41.787065    4572 start.go:159] libmachine.API.Create for "auto-688000" (driver="qemu2")
	I0917 02:43:41.787121    4572 client.go:168] LocalClient.Create starting
	I0917 02:43:41.787252    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:43:41.787321    4572 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:41.787342    4572 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:41.787405    4572 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:43:41.787449    4572 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:41.787463    4572 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:41.788019    4572 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:43:41.955450    4572 main.go:141] libmachine: Creating SSH key...
	I0917 02:43:42.024216    4572 main.go:141] libmachine: Creating Disk image...
	I0917 02:43:42.024223    4572 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:43:42.024437    4572 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:42.033730    4572 main.go:141] libmachine: STDOUT: 
	I0917 02:43:42.033748    4572 main.go:141] libmachine: STDERR: 
	I0917 02:43:42.033827    4572 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2 +20000M
	I0917 02:43:42.042074    4572 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:43:42.042089    4572 main.go:141] libmachine: STDERR: 
	I0917 02:43:42.042102    4572 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:42.042107    4572 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:43:42.042115    4572 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:43:42.042149    4572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:7f:fe:73:d9:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/auto-688000/disk.qcow2
	I0917 02:43:42.043885    4572 main.go:141] libmachine: STDOUT: 
	I0917 02:43:42.043901    4572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:43:42.043911    4572 client.go:171] duration metric: took 256.782291ms to LocalClient.Create
	I0917 02:43:44.046044    4572 start.go:128] duration metric: took 2.316115834s to createHost
	I0917 02:43:44.046088    4572 start.go:83] releasing machines lock for "auto-688000", held for 2.316616125s
	W0917 02:43:44.046341    4572 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:44.060303    4572 out.go:201] 
	W0917 02:43:44.065407    4572 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:43:44.065420    4572 out.go:270] * 
	* 
	W0917 02:43:44.066772    4572 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:43:44.078391    4572 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.85s)
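
Every failure in this group reduces to the same line: socket_vmnet_client exits with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket_vmnet daemon socket on the build agent. A minimal probe of that socket, as a sketch: the socket path is taken from the SocketVMnetPath value in the config above, and the program below is an illustration, not part of minikube or the test suite.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// Probe the socket_vmnet daemon the same way socket_vmnet_client must:
	// by connecting to the UNIX socket at /var/run/socket_vmnet.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent the probe would report the same refused connection.
			fmt.Println("socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the daemon on the agent (for a Homebrew install, typically via its launchd service) should clear this whole group of failures, since no qemu2 VM can get its network until that socket accepts connections.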

TestNetworkPlugins/group/flannel/Start (9.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.908006958s)

-- stdout --
	* [flannel-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-688000" primary control-plane node in "flannel-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:43:46.241786    4681 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:43:46.241919    4681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:46.241922    4681 out.go:358] Setting ErrFile to fd 2...
	I0917 02:43:46.241925    4681 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:46.242047    4681 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:43:46.243085    4681 out.go:352] Setting JSON to false
	I0917 02:43:46.259232    4681 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4396,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:43:46.259310    4681 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:43:46.265413    4681 out.go:177] * [flannel-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:43:46.273163    4681 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:43:46.273199    4681 notify.go:220] Checking for updates...
	I0917 02:43:46.281181    4681 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:43:46.284198    4681 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:43:46.287265    4681 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:43:46.290182    4681 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:43:46.293139    4681 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:43:46.296504    4681 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:43:46.296575    4681 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:43:46.296629    4681 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:43:46.300155    4681 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:43:46.307167    4681 start.go:297] selected driver: qemu2
	I0917 02:43:46.307174    4681 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:43:46.307181    4681 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:43:46.309397    4681 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:43:46.312195    4681 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:43:46.315205    4681 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:43:46.315224    4681 cni.go:84] Creating CNI manager for "flannel"
	I0917 02:43:46.315228    4681 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0917 02:43:46.315256    4681 start.go:340] cluster config:
	{Name:flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:43:46.319579    4681 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:43:46.328010    4681 out.go:177] * Starting "flannel-688000" primary control-plane node in "flannel-688000" cluster
	I0917 02:43:46.332192    4681 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:43:46.332223    4681 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:43:46.332228    4681 cache.go:56] Caching tarball of preloaded images
	I0917 02:43:46.332316    4681 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:43:46.332322    4681 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:43:46.332389    4681 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/flannel-688000/config.json ...
	I0917 02:43:46.332399    4681 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/flannel-688000/config.json: {Name:mk9bfe5468557a9671ef27ee37e2a0a784830aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:43:46.332769    4681 start.go:360] acquireMachinesLock for flannel-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:46.332801    4681 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "flannel-688000"
	I0917 02:43:46.332812    4681 start.go:93] Provisioning new machine with config: &{Name:flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:46.332847    4681 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:43:46.337026    4681 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:43:46.353753    4681 start.go:159] libmachine.API.Create for "flannel-688000" (driver="qemu2")
	I0917 02:43:46.353789    4681 client.go:168] LocalClient.Create starting
	I0917 02:43:46.353876    4681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:43:46.353909    4681 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:46.353917    4681 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:46.353953    4681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:43:46.353977    4681 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:46.353985    4681 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:46.354387    4681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:43:46.511911    4681 main.go:141] libmachine: Creating SSH key...
	I0917 02:43:46.698233    4681 main.go:141] libmachine: Creating Disk image...
	I0917 02:43:46.698246    4681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:43:46.698484    4681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:46.708150    4681 main.go:141] libmachine: STDOUT: 
	I0917 02:43:46.708169    4681 main.go:141] libmachine: STDERR: 
	I0917 02:43:46.708244    4681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2 +20000M
	I0917 02:43:46.716581    4681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:43:46.716596    4681 main.go:141] libmachine: STDERR: 
	I0917 02:43:46.716614    4681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:46.716619    4681 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:43:46.716631    4681 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:43:46.716660    4681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:df:27:ba:34:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:46.718359    4681 main.go:141] libmachine: STDOUT: 
	I0917 02:43:46.718373    4681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:43:46.718391    4681 client.go:171] duration metric: took 364.599042ms to LocalClient.Create
	I0917 02:43:48.720581    4681 start.go:128] duration metric: took 2.387710708s to createHost
	I0917 02:43:48.720681    4681 start.go:83] releasing machines lock for "flannel-688000", held for 2.387886166s
	W0917 02:43:48.720752    4681 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:48.727648    4681 out.go:177] * Deleting "flannel-688000" in qemu2 ...
	W0917 02:43:48.755949    4681 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:48.755976    4681 start.go:729] Will try again in 5 seconds ...
	I0917 02:43:53.758186    4681 start.go:360] acquireMachinesLock for flannel-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:53.758721    4681 start.go:364] duration metric: took 437µs to acquireMachinesLock for "flannel-688000"
	I0917 02:43:53.758878    4681 start.go:93] Provisioning new machine with config: &{Name:flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:53.759154    4681 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:43:53.764911    4681 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:43:53.814011    4681 start.go:159] libmachine.API.Create for "flannel-688000" (driver="qemu2")
	I0917 02:43:53.814063    4681 client.go:168] LocalClient.Create starting
	I0917 02:43:53.814182    4681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:43:53.814253    4681 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:53.814272    4681 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:53.814334    4681 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:43:53.814396    4681 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:53.814408    4681 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:53.814952    4681 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:43:53.981116    4681 main.go:141] libmachine: Creating SSH key...
	I0917 02:43:54.059664    4681 main.go:141] libmachine: Creating Disk image...
	I0917 02:43:54.059671    4681 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:43:54.059875    4681 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:54.069765    4681 main.go:141] libmachine: STDOUT: 
	I0917 02:43:54.069782    4681 main.go:141] libmachine: STDERR: 
	I0917 02:43:54.069854    4681 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2 +20000M
	I0917 02:43:54.078183    4681 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:43:54.078197    4681 main.go:141] libmachine: STDERR: 
	I0917 02:43:54.078220    4681 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:54.078227    4681 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:43:54.078236    4681 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:43:54.078264    4681 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:8d:e0:72:f3:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/flannel-688000/disk.qcow2
	I0917 02:43:54.080026    4681 main.go:141] libmachine: STDOUT: 
	I0917 02:43:54.080043    4681 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:43:54.080060    4681 client.go:171] duration metric: took 265.993417ms to LocalClient.Create
	I0917 02:43:56.082154    4681 start.go:128] duration metric: took 2.322977s to createHost
	I0917 02:43:56.082234    4681 start.go:83] releasing machines lock for "flannel-688000", held for 2.323506166s
	W0917 02:43:56.082387    4681 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:43:56.093368    4681 out.go:201] 
	W0917 02:43:56.100438    4681 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:43:56.100457    4681 out.go:270] * 
	* 
	W0917 02:43:56.101717    4681 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:43:56.115341    4681 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.91s)
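
The retry behavior is identical in every one of these failures: one createHost attempt, the logged "Will try again in 5 seconds ..." pause, one more attempt, then exit status 80 (GUEST_PROVISION), which net_test.go reports as "failed start". A simplified Go sketch of that control flow as observed in the logs follows; it is an illustration of the behavior, not minikube's actual source.

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for minikube's host creation; in these logs it
	// always fails because socket_vmnet_client cannot reach the daemon socket.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause
			err = createHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // exit status 80, as asserted by net_test.go
		}
	}

With roughly 2.3 seconds per failed attempt plus the 5-second pause, this retry loop accounts for the 9-10 second duration recorded for each of these Start tests.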

TestNetworkPlugins/group/kindnet/Start (10.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.031973417s)

-- stdout --
	* [kindnet-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-688000" primary control-plane node in "kindnet-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:43:58.452588    4803 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:43:58.452722    4803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:58.452725    4803 out.go:358] Setting ErrFile to fd 2...
	I0917 02:43:58.452728    4803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:43:58.452889    4803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:43:58.453998    4803 out.go:352] Setting JSON to false
	I0917 02:43:58.470591    4803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4408,"bootTime":1726561830,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:43:58.470660    4803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:43:58.476231    4803 out.go:177] * [kindnet-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:43:58.484124    4803 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:43:58.484192    4803 notify.go:220] Checking for updates...
	I0917 02:43:58.491072    4803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:43:58.494070    4803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:43:58.498071    4803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:43:58.501018    4803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:43:58.504089    4803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:43:58.507476    4803 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:43:58.507542    4803 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:43:58.507587    4803 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:43:58.511022    4803 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:43:58.517968    4803 start.go:297] selected driver: qemu2
	I0917 02:43:58.517984    4803 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:43:58.517993    4803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:43:58.520259    4803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:43:58.522982    4803 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:43:58.526166    4803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:43:58.526199    4803 cni.go:84] Creating CNI manager for "kindnet"
	I0917 02:43:58.526203    4803 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 02:43:58.526241    4803 start.go:340] cluster config:
	{Name:kindnet-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:43:58.529792    4803 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:43:58.537070    4803 out.go:177] * Starting "kindnet-688000" primary control-plane node in "kindnet-688000" cluster
	I0917 02:43:58.540901    4803 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:43:58.540913    4803 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:43:58.540919    4803 cache.go:56] Caching tarball of preloaded images
	I0917 02:43:58.540967    4803 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:43:58.540972    4803 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:43:58.541022    4803 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kindnet-688000/config.json ...
	I0917 02:43:58.541032    4803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kindnet-688000/config.json: {Name:mk908d8f45fa15f23889c58c1df88e05fb092643 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:43:58.541279    4803 start.go:360] acquireMachinesLock for kindnet-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:43:58.541310    4803 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "kindnet-688000"
	I0917 02:43:58.541320    4803 start.go:93] Provisioning new machine with config: &{Name:kindnet-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:43:58.541343    4803 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:43:58.548049    4803 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:43:58.563141    4803 start.go:159] libmachine.API.Create for "kindnet-688000" (driver="qemu2")
	I0917 02:43:58.563176    4803 client.go:168] LocalClient.Create starting
	I0917 02:43:58.563246    4803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:43:58.563276    4803 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:58.563285    4803 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:58.563322    4803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:43:58.563346    4803 main.go:141] libmachine: Decoding PEM data...
	I0917 02:43:58.563356    4803 main.go:141] libmachine: Parsing certificate...
	I0917 02:43:58.563709    4803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:43:58.725782    4803 main.go:141] libmachine: Creating SSH key...
	I0917 02:43:58.804421    4803 main.go:141] libmachine: Creating Disk image...
	I0917 02:43:58.804427    4803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:43:58.804623    4803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:43:58.814032    4803 main.go:141] libmachine: STDOUT: 
	I0917 02:43:58.814045    4803 main.go:141] libmachine: STDERR: 
	I0917 02:43:58.814099    4803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2 +20000M
	I0917 02:43:58.822211    4803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:43:58.822227    4803 main.go:141] libmachine: STDERR: 
	I0917 02:43:58.822245    4803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:43:58.822251    4803 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:43:58.822262    4803 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:43:58.822289    4803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0f:f6:ed:8a:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:43:58.824000    4803 main.go:141] libmachine: STDOUT: 
	I0917 02:43:58.824013    4803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:43:58.824033    4803 client.go:171] duration metric: took 260.852917ms to LocalClient.Create
	I0917 02:44:00.826233    4803 start.go:128] duration metric: took 2.284872333s to createHost
	I0917 02:44:00.826329    4803 start.go:83] releasing machines lock for "kindnet-688000", held for 2.285024708s
	W0917 02:44:00.826392    4803 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:00.837742    4803 out.go:177] * Deleting "kindnet-688000" in qemu2 ...
	W0917 02:44:00.874747    4803 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:00.874777    4803 start.go:729] Will try again in 5 seconds ...
	I0917 02:44:05.876334    4803 start.go:360] acquireMachinesLock for kindnet-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:05.876866    4803 start.go:364] duration metric: took 449.25µs to acquireMachinesLock for "kindnet-688000"
	I0917 02:44:05.876973    4803 start.go:93] Provisioning new machine with config: &{Name:kindnet-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:05.877296    4803 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:05.886934    4803 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:05.930666    4803 start.go:159] libmachine.API.Create for "kindnet-688000" (driver="qemu2")
	I0917 02:44:05.930724    4803 client.go:168] LocalClient.Create starting
	I0917 02:44:05.930860    4803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:05.930931    4803 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:05.930947    4803 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:05.931022    4803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:05.931071    4803 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:05.931093    4803 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:05.931754    4803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:06.099441    4803 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:06.391798    4803 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:06.391816    4803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:06.392090    4803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:44:06.401978    4803 main.go:141] libmachine: STDOUT: 
	I0917 02:44:06.401998    4803 main.go:141] libmachine: STDERR: 
	I0917 02:44:06.402068    4803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2 +20000M
	I0917 02:44:06.410173    4803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:06.410187    4803 main.go:141] libmachine: STDERR: 
	I0917 02:44:06.410200    4803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:44:06.410206    4803 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:06.410214    4803 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:06.410264    4803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f6:37:8d:7f:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kindnet-688000/disk.qcow2
	I0917 02:44:06.411951    4803 main.go:141] libmachine: STDOUT: 
	I0917 02:44:06.411964    4803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:06.411975    4803 client.go:171] duration metric: took 481.248042ms to LocalClient.Create
	I0917 02:44:08.414277    4803 start.go:128] duration metric: took 2.536955667s to createHost
	I0917 02:44:08.414379    4803 start.go:83] releasing machines lock for "kindnet-688000", held for 2.537482542s
	W0917 02:44:08.414768    4803 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:08.423226    4803 out.go:201] 
	W0917 02:44:08.430422    4803 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:44:08.430466    4803 out.go:270] * 
	* 
	W0917 02:44:08.433205    4803 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:44:08.442226    4803 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.03s)
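Every step up to the VM launch succeeds in this log: qemu-img convert and qemu-img resize both return cleanly, and the failure appears only when socket_vmnet_client is invoked. "Connection refused" on the /var/run/socket_vmnet Unix socket means nothing is listening there, i.e. the socket_vmnet daemon is not running on the build host, so every qemu2 start in this run fails the same way before QEMU itself ever executes. A minimal triage sketch for the host (the daemon binary path is inferred from the client path recorded above, and the gateway address is an assumption, not something this report captures):

	# Is anything listening on the socket the client is trying to reach?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet

	# If not, start the daemon (binary path assumed to sit next to the client used in the log)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With a listener in place, the recorded socket_vmnet_client command line should be able to hand QEMU its network file descriptor instead of exiting with status 1.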

TestNetworkPlugins/group/enable-default-cni/Start (9.84s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.841090916s)

-- stdout --
	* [enable-default-cni-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-688000" primary control-plane node in "enable-default-cni-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:44:10.802209    4917 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:44:10.802341    4917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:10.802345    4917 out.go:358] Setting ErrFile to fd 2...
	I0917 02:44:10.802348    4917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:10.802476    4917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:44:10.803552    4917 out.go:352] Setting JSON to false
	I0917 02:44:10.820032    4917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4420,"bootTime":1726561830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:44:10.820112    4917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:44:10.826646    4917 out.go:177] * [enable-default-cni-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:44:10.833405    4917 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:44:10.833451    4917 notify.go:220] Checking for updates...
	I0917 02:44:10.839355    4917 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:44:10.842385    4917 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:44:10.845432    4917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:44:10.846836    4917 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:44:10.850378    4917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:44:10.853822    4917 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:44:10.853889    4917 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:44:10.853931    4917 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:44:10.858253    4917 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:44:10.865407    4917 start.go:297] selected driver: qemu2
	I0917 02:44:10.865414    4917 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:44:10.865421    4917 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:44:10.867765    4917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:44:10.870409    4917 out.go:177] * Automatically selected the socket_vmnet network
	E0917 02:44:10.873513    4917 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0917 02:44:10.873527    4917 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:44:10.873550    4917 cni.go:84] Creating CNI manager for "bridge"
	I0917 02:44:10.873558    4917 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:44:10.873588    4917 start.go:340] cluster config:
	{Name:enable-default-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:44:10.877373    4917 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:44:10.885370    4917 out.go:177] * Starting "enable-default-cni-688000" primary control-plane node in "enable-default-cni-688000" cluster
	I0917 02:44:10.889425    4917 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:44:10.889443    4917 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:44:10.889459    4917 cache.go:56] Caching tarball of preloaded images
	I0917 02:44:10.889525    4917 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:44:10.889531    4917 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:44:10.889613    4917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/enable-default-cni-688000/config.json ...
	I0917 02:44:10.889624    4917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/enable-default-cni-688000/config.json: {Name:mk62e8cf48da708ae7bf33e8fdaf93c160822c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:44:10.889902    4917 start.go:360] acquireMachinesLock for enable-default-cni-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:10.889940    4917 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "enable-default-cni-688000"
	I0917 02:44:10.889952    4917 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:10.889980    4917 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:10.897375    4917 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:10.915209    4917 start.go:159] libmachine.API.Create for "enable-default-cni-688000" (driver="qemu2")
	I0917 02:44:10.915247    4917 client.go:168] LocalClient.Create starting
	I0917 02:44:10.915344    4917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:10.915381    4917 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:10.915391    4917 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:10.915437    4917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:10.915463    4917 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:10.915473    4917 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:10.915852    4917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:11.071145    4917 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:11.164401    4917 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:11.164408    4917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:11.164604    4917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:11.173880    4917 main.go:141] libmachine: STDOUT: 
	I0917 02:44:11.173901    4917 main.go:141] libmachine: STDERR: 
	I0917 02:44:11.173959    4917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2 +20000M
	I0917 02:44:11.181914    4917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:11.181927    4917 main.go:141] libmachine: STDERR: 
	I0917 02:44:11.181942    4917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:11.181951    4917 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:11.181963    4917 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:11.181992    4917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:6d:10:95:75:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:11.183658    4917 main.go:141] libmachine: STDOUT: 
	I0917 02:44:11.183673    4917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:11.183696    4917 client.go:171] duration metric: took 268.441958ms to LocalClient.Create
	I0917 02:44:13.185924    4917 start.go:128] duration metric: took 2.295928292s to createHost
	I0917 02:44:13.186011    4917 start.go:83] releasing machines lock for "enable-default-cni-688000", held for 2.296075792s
	W0917 02:44:13.186065    4917 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:13.204062    4917 out.go:177] * Deleting "enable-default-cni-688000" in qemu2 ...
	W0917 02:44:13.232807    4917 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:13.232842    4917 start.go:729] Will try again in 5 seconds ...
	I0917 02:44:18.234332    4917 start.go:360] acquireMachinesLock for enable-default-cni-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:18.234444    4917 start.go:364] duration metric: took 93.208µs to acquireMachinesLock for "enable-default-cni-688000"
	I0917 02:44:18.234458    4917 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:18.234522    4917 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:18.241512    4917 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:18.256944    4917 start.go:159] libmachine.API.Create for "enable-default-cni-688000" (driver="qemu2")
	I0917 02:44:18.256978    4917 client.go:168] LocalClient.Create starting
	I0917 02:44:18.257036    4917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:18.257076    4917 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:18.257084    4917 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:18.257118    4917 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:18.257141    4917 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:18.257147    4917 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:18.257435    4917 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:18.437036    4917 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:18.556089    4917 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:18.556096    4917 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:18.556283    4917 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:18.565830    4917 main.go:141] libmachine: STDOUT: 
	I0917 02:44:18.565853    4917 main.go:141] libmachine: STDERR: 
	I0917 02:44:18.565922    4917 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2 +20000M
	I0917 02:44:18.573809    4917 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:18.573832    4917 main.go:141] libmachine: STDERR: 
	I0917 02:44:18.573844    4917 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:18.573857    4917 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:18.573863    4917 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:18.573891    4917 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:39:c3:06:c1:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/enable-default-cni-688000/disk.qcow2
	I0917 02:44:18.575535    4917 main.go:141] libmachine: STDOUT: 
	I0917 02:44:18.575549    4917 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:18.575561    4917 client.go:171] duration metric: took 318.581292ms to LocalClient.Create
	I0917 02:44:20.577663    4917 start.go:128] duration metric: took 2.343128125s to createHost
	I0917 02:44:20.577724    4917 start.go:83] releasing machines lock for "enable-default-cni-688000", held for 2.343287166s
	W0917 02:44:20.577931    4917 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:20.588430    4917 out.go:201] 
	W0917 02:44:20.594453    4917 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:44:20.594465    4917 out.go:270] * 
	* 
	W0917 02:44:20.595840    4917 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:44:20.606339    4917 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)
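Note the E-level line in the stderr above: the test still passes --enable-default-cni, which minikube treats as deprecated and rewrites to --cni=bridge before building the cluster config (hence CNI:bridge and EnableDefaultCNI:false in the config dump). The unambiguous modern spelling, mirroring the bridge test below, would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

Either spelling fails identically here, since the run never gets past host creation and no CNI is ever applied.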

TestNetworkPlugins/group/bridge/Start (9.96s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.961613375s)

-- stdout --
	* [bridge-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-688000" primary control-plane node in "bridge-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:44:22.827703    5029 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:44:22.827843    5029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:22.827846    5029 out.go:358] Setting ErrFile to fd 2...
	I0917 02:44:22.827849    5029 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:22.827969    5029 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:44:22.829103    5029 out.go:352] Setting JSON to false
	I0917 02:44:22.845364    5029 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4432,"bootTime":1726561830,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:44:22.845438    5029 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:44:22.852849    5029 out.go:177] * [bridge-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:44:22.862712    5029 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:44:22.862791    5029 notify.go:220] Checking for updates...
	I0917 02:44:22.870645    5029 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:44:22.873681    5029 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:44:22.879707    5029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:44:22.883749    5029 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:44:22.886653    5029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:44:22.890106    5029 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:44:22.890170    5029 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:44:22.890219    5029 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:44:22.894656    5029 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:44:22.901683    5029 start.go:297] selected driver: qemu2
	I0917 02:44:22.901688    5029 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:44:22.901693    5029 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:44:22.903850    5029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:44:22.915312    5029 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:44:22.918756    5029 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:44:22.918775    5029 cni.go:84] Creating CNI manager for "bridge"
	I0917 02:44:22.918779    5029 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:44:22.918810    5029 start.go:340] cluster config:
	{Name:bridge-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:44:22.922439    5029 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:44:22.930664    5029 out.go:177] * Starting "bridge-688000" primary control-plane node in "bridge-688000" cluster
	I0917 02:44:22.934721    5029 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:44:22.934743    5029 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:44:22.934754    5029 cache.go:56] Caching tarball of preloaded images
	I0917 02:44:22.934828    5029 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:44:22.934834    5029 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:44:22.934909    5029 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/bridge-688000/config.json ...
	I0917 02:44:22.934925    5029 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/bridge-688000/config.json: {Name:mk18da967fe4be0fb0350c1897c9cfcf96d7239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:44:22.935146    5029 start.go:360] acquireMachinesLock for bridge-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:22.935180    5029 start.go:364] duration metric: took 28µs to acquireMachinesLock for "bridge-688000"
	I0917 02:44:22.935191    5029 start.go:93] Provisioning new machine with config: &{Name:bridge-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:22.935232    5029 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:22.943703    5029 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:22.960631    5029 start.go:159] libmachine.API.Create for "bridge-688000" (driver="qemu2")
	I0917 02:44:22.960659    5029 client.go:168] LocalClient.Create starting
	I0917 02:44:22.960720    5029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:22.960750    5029 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:22.960759    5029 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:22.960799    5029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:22.960822    5029 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:22.960831    5029 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:22.961242    5029 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:23.120171    5029 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:23.210050    5029 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:23.210056    5029 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:23.210256    5029 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:23.219781    5029 main.go:141] libmachine: STDOUT: 
	I0917 02:44:23.219810    5029 main.go:141] libmachine: STDERR: 
	I0917 02:44:23.219871    5029 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2 +20000M
	I0917 02:44:23.227831    5029 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:23.227852    5029 main.go:141] libmachine: STDERR: 
	I0917 02:44:23.227872    5029 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:23.227878    5029 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:23.227890    5029 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:23.227956    5029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:27:0e:6b:62:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:23.229573    5029 main.go:141] libmachine: STDOUT: 
	I0917 02:44:23.229586    5029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:23.229609    5029 client.go:171] duration metric: took 268.94625ms to LocalClient.Create
	I0917 02:44:25.232429    5029 start.go:128] duration metric: took 2.297185625s to createHost
	I0917 02:44:25.232510    5029 start.go:83] releasing machines lock for "bridge-688000", held for 2.297335708s
	W0917 02:44:25.232567    5029 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:25.247811    5029 out.go:177] * Deleting "bridge-688000" in qemu2 ...
	W0917 02:44:25.278006    5029 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:25.278033    5029 start.go:729] Will try again in 5 seconds ...
	I0917 02:44:30.280178    5029 start.go:360] acquireMachinesLock for bridge-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:30.280716    5029 start.go:364] duration metric: took 435.833µs to acquireMachinesLock for "bridge-688000"
	I0917 02:44:30.280862    5029 start.go:93] Provisioning new machine with config: &{Name:bridge-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:30.281179    5029 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:30.289820    5029 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:30.341840    5029 start.go:159] libmachine.API.Create for "bridge-688000" (driver="qemu2")
	I0917 02:44:30.341908    5029 client.go:168] LocalClient.Create starting
	I0917 02:44:30.342071    5029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:30.342148    5029 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:30.342163    5029 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:30.342240    5029 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:30.342291    5029 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:30.342302    5029 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:30.342867    5029 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:30.511885    5029 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:30.693083    5029 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:30.693100    5029 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:30.693327    5029 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:30.703253    5029 main.go:141] libmachine: STDOUT: 
	I0917 02:44:30.703282    5029 main.go:141] libmachine: STDERR: 
	I0917 02:44:30.703350    5029 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2 +20000M
	I0917 02:44:30.711307    5029 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:30.711322    5029 main.go:141] libmachine: STDERR: 
	I0917 02:44:30.711334    5029 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:30.711339    5029 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:30.711347    5029 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:30.711369    5029 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:01:dd:5c:02:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/bridge-688000/disk.qcow2
	I0917 02:44:30.713036    5029 main.go:141] libmachine: STDOUT: 
	I0917 02:44:30.713054    5029 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:30.713066    5029 client.go:171] duration metric: took 371.14775ms to LocalClient.Create
	I0917 02:44:32.715353    5029 start.go:128] duration metric: took 2.434122166s to createHost
	I0917 02:44:32.715434    5029 start.go:83] releasing machines lock for "bridge-688000", held for 2.434710542s
	W0917 02:44:32.715731    5029 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:32.730414    5029 out.go:201] 
	W0917 02:44:32.733403    5029 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:44:32.733535    5029 out.go:270] * 
	* 
	W0917 02:44:32.736385    5029 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:44:32.749353    5029 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.96s)
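
Every failure in this group reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU the vmnet file descriptor (the -netdev socket,id=net0,fd=3 argument above) and the VM never launches. A minimal Go probe (a diagnostic sketch, not part of the test suite) reproduces the same error from the host side:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	// Dialing the unix socket is the first thing socket_vmnet_client does.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Prints "connection refused" when the socket file exists with no
		// daemon behind it, or "no such file or directory" if it was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If the probe fails, starting the socket_vmnet daemon (typically run as a root launchd service) before rerunning the group is the likely fix; the "minikube delete -p bridge-688000" advice in the log only clears the half-created profile.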

TestNetworkPlugins/group/kubenet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.815828s)

-- stdout --
	* [kubenet-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-688000" primary control-plane node in "kubenet-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:44:35.012733    5138 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:44:35.012872    5138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:35.012875    5138 out.go:358] Setting ErrFile to fd 2...
	I0917 02:44:35.012878    5138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:35.013033    5138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:44:35.014135    5138 out.go:352] Setting JSON to false
	I0917 02:44:35.031632    5138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4445,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:44:35.031734    5138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:44:35.037189    5138 out.go:177] * [kubenet-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:44:35.045070    5138 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:44:35.045134    5138 notify.go:220] Checking for updates...
	I0917 02:44:35.051987    5138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:44:35.054955    5138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:44:35.058991    5138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:44:35.062065    5138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:44:35.065028    5138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:44:35.068356    5138 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:44:35.068421    5138 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:44:35.068482    5138 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:44:35.073017    5138 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:44:35.079988    5138 start.go:297] selected driver: qemu2
	I0917 02:44:35.079994    5138 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:44:35.080001    5138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:44:35.082360    5138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:44:35.086018    5138 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:44:35.090088    5138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:44:35.090108    5138 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0917 02:44:35.090144    5138 start.go:340] cluster config:
	{Name:kubenet-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:44:35.093547    5138 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:44:35.101873    5138 out.go:177] * Starting "kubenet-688000" primary control-plane node in "kubenet-688000" cluster
	I0917 02:44:35.106061    5138 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:44:35.106077    5138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:44:35.106086    5138 cache.go:56] Caching tarball of preloaded images
	I0917 02:44:35.106158    5138 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:44:35.106164    5138 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:44:35.106231    5138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kubenet-688000/config.json ...
	I0917 02:44:35.106247    5138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/kubenet-688000/config.json: {Name:mk850a836c58cacdee77159e99956d846798963d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:44:35.106466    5138 start.go:360] acquireMachinesLock for kubenet-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:35.106498    5138 start.go:364] duration metric: took 26.584µs to acquireMachinesLock for "kubenet-688000"
	I0917 02:44:35.106508    5138 start.go:93] Provisioning new machine with config: &{Name:kubenet-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:35.106537    5138 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:35.110042    5138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:35.125883    5138 start.go:159] libmachine.API.Create for "kubenet-688000" (driver="qemu2")
	I0917 02:44:35.125919    5138 client.go:168] LocalClient.Create starting
	I0917 02:44:35.125985    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:35.126013    5138 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:35.126028    5138 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:35.126072    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:35.126095    5138 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:35.126105    5138 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:35.126428    5138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:35.285725    5138 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:35.416163    5138 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:35.416172    5138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:35.416353    5138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:35.425450    5138 main.go:141] libmachine: STDOUT: 
	I0917 02:44:35.425466    5138 main.go:141] libmachine: STDERR: 
	I0917 02:44:35.425530    5138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2 +20000M
	I0917 02:44:35.433399    5138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:35.433412    5138 main.go:141] libmachine: STDERR: 
	I0917 02:44:35.433438    5138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:35.433444    5138 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:35.433457    5138 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:35.433480    5138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:04:bd:14:e9:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:35.435192    5138 main.go:141] libmachine: STDOUT: 
	I0917 02:44:35.435206    5138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:35.435228    5138 client.go:171] duration metric: took 309.305166ms to LocalClient.Create
	I0917 02:44:37.437425    5138 start.go:128] duration metric: took 2.330873125s to createHost
	I0917 02:44:37.437523    5138 start.go:83] releasing machines lock for "kubenet-688000", held for 2.331030833s
	W0917 02:44:37.437584    5138 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:37.457899    5138 out.go:177] * Deleting "kubenet-688000" in qemu2 ...
	W0917 02:44:37.483248    5138 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:37.483273    5138 start.go:729] Will try again in 5 seconds ...
	I0917 02:44:42.485421    5138 start.go:360] acquireMachinesLock for kubenet-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:42.485557    5138 start.go:364] duration metric: took 111.708µs to acquireMachinesLock for "kubenet-688000"
	I0917 02:44:42.485576    5138 start.go:93] Provisioning new machine with config: &{Name:kubenet-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:42.485614    5138 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:42.494148    5138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:42.510662    5138 start.go:159] libmachine.API.Create for "kubenet-688000" (driver="qemu2")
	I0917 02:44:42.510691    5138 client.go:168] LocalClient.Create starting
	I0917 02:44:42.510768    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:42.510808    5138 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:42.510821    5138 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:42.510866    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:42.510890    5138 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:42.510901    5138 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:42.511190    5138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:42.668114    5138 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:42.741093    5138 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:42.741103    5138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:42.741303    5138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:42.750576    5138 main.go:141] libmachine: STDOUT: 
	I0917 02:44:42.750602    5138 main.go:141] libmachine: STDERR: 
	I0917 02:44:42.750672    5138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2 +20000M
	I0917 02:44:42.758720    5138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:42.758739    5138 main.go:141] libmachine: STDERR: 
	I0917 02:44:42.758757    5138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:42.758761    5138 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:42.758771    5138 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:42.758805    5138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:56:dd:35:9c:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/kubenet-688000/disk.qcow2
	I0917 02:44:42.760500    5138 main.go:141] libmachine: STDOUT: 
	I0917 02:44:42.760516    5138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:42.760530    5138 client.go:171] duration metric: took 249.836333ms to LocalClient.Create
	I0917 02:44:44.762729    5138 start.go:128] duration metric: took 2.277097959s to createHost
	I0917 02:44:44.762826    5138 start.go:83] releasing machines lock for "kubenet-688000", held for 2.277273s
	W0917 02:44:44.763197    5138 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:44.773920    5138 out.go:201] 
	W0917 02:44:44.777881    5138 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:44:44.777895    5138 out.go:270] * 
	* 
	W0917 02:44:44.779436    5138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:44:44.786824    5138 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.82s)
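
Each of these failures also shows minikube's single delete-and-retry cycle: StartHost fails, the half-built profile is deleted, and createHost runs once more after a fixed wait before the test gives up with exit status 80. An illustrative reconstruction of that control flow (function names here are hypothetical, not minikube's actual API; the messages, delay, and exit code are taken from the log above):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the driver's create step, which fails on every
// attempt because the QEMU launch is piped through socket_vmnet_client.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status 80 that net_test.go asserts on
		}
	}
}

Because the failure is environmental rather than transient, the retry only adds about five seconds per test, which is why every Start test in this group fails in just under ten seconds.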

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.725280625s)

-- stdout --
	* [custom-flannel-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-688000" primary control-plane node in "custom-flannel-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:44:47.004188    5247 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:44:47.004314    5247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:47.004316    5247 out.go:358] Setting ErrFile to fd 2...
	I0917 02:44:47.004318    5247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:47.004437    5247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:44:47.005444    5247 out.go:352] Setting JSON to false
	I0917 02:44:47.021989    5247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4457,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:44:47.022078    5247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:44:47.032813    5247 out.go:177] * [custom-flannel-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:44:47.036762    5247 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:44:47.036799    5247 notify.go:220] Checking for updates...
	I0917 02:44:47.043124    5247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:44:47.045787    5247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:44:47.048812    5247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:44:47.051761    5247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:44:47.054782    5247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:44:47.058060    5247 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:44:47.058123    5247 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:44:47.058168    5247 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:44:47.061700    5247 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:44:47.068802    5247 start.go:297] selected driver: qemu2
	I0917 02:44:47.068809    5247 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:44:47.068816    5247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:44:47.071004    5247 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:44:47.073749    5247 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:44:47.076748    5247 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:44:47.076774    5247 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0917 02:44:47.076786    5247 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0917 02:44:47.076814    5247 start.go:340] cluster config:
	{Name:custom-flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:44:47.080977    5247 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:44:47.087636    5247 out.go:177] * Starting "custom-flannel-688000" primary control-plane node in "custom-flannel-688000" cluster
	I0917 02:44:47.091742    5247 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:44:47.091790    5247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:44:47.091797    5247 cache.go:56] Caching tarball of preloaded images
	I0917 02:44:47.091915    5247 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:44:47.091921    5247 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:44:47.091990    5247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/custom-flannel-688000/config.json ...
	I0917 02:44:47.092002    5247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/custom-flannel-688000/config.json: {Name:mk712b40ee5beb0f755770dffc6ad3d5607d176b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:44:47.092237    5247 start.go:360] acquireMachinesLock for custom-flannel-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:47.092270    5247 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "custom-flannel-688000"
	I0917 02:44:47.092282    5247 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:47.092309    5247 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:47.100766    5247 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:47.116247    5247 start.go:159] libmachine.API.Create for "custom-flannel-688000" (driver="qemu2")
	I0917 02:44:47.116279    5247 client.go:168] LocalClient.Create starting
	I0917 02:44:47.116339    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:47.116370    5247 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:47.116379    5247 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:47.116420    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:47.116442    5247 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:47.116449    5247 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:47.116788    5247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:47.275060    5247 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:47.329455    5247 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:47.329461    5247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:47.329657    5247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:47.338895    5247 main.go:141] libmachine: STDOUT: 
	I0917 02:44:47.338914    5247 main.go:141] libmachine: STDERR: 
	I0917 02:44:47.338993    5247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2 +20000M
	I0917 02:44:47.347319    5247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:47.347334    5247 main.go:141] libmachine: STDERR: 
	I0917 02:44:47.347352    5247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:47.347359    5247 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:47.347370    5247 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:47.347394    5247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:56:cb:fb:3f:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:47.349091    5247 main.go:141] libmachine: STDOUT: 
	I0917 02:44:47.349104    5247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:47.349125    5247 client.go:171] duration metric: took 232.841667ms to LocalClient.Create
	I0917 02:44:49.351279    5247 start.go:128] duration metric: took 2.258963167s to createHost
	I0917 02:44:49.351343    5247 start.go:83] releasing machines lock for "custom-flannel-688000", held for 2.259079166s
	W0917 02:44:49.351445    5247 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:49.362098    5247 out.go:177] * Deleting "custom-flannel-688000" in qemu2 ...
	W0917 02:44:49.381115    5247 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:49.381131    5247 start.go:729] Will try again in 5 seconds ...
	I0917 02:44:54.383324    5247 start.go:360] acquireMachinesLock for custom-flannel-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:54.383748    5247 start.go:364] duration metric: took 358.375µs to acquireMachinesLock for "custom-flannel-688000"
	I0917 02:44:54.383799    5247 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:54.383990    5247 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:54.388439    5247 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:54.430969    5247 start.go:159] libmachine.API.Create for "custom-flannel-688000" (driver="qemu2")
	I0917 02:44:54.431028    5247 client.go:168] LocalClient.Create starting
	I0917 02:44:54.431138    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:54.431205    5247 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:54.431222    5247 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:54.431274    5247 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:54.431314    5247 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:54.431325    5247 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:54.431914    5247 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:54.599722    5247 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:54.638531    5247 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:54.638537    5247 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:54.638734    5247 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:54.648117    5247 main.go:141] libmachine: STDOUT: 
	I0917 02:44:54.648136    5247 main.go:141] libmachine: STDERR: 
	I0917 02:44:54.648188    5247 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2 +20000M
	I0917 02:44:54.656188    5247 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:54.656209    5247 main.go:141] libmachine: STDERR: 
	I0917 02:44:54.656222    5247 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:54.656227    5247 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:54.656237    5247 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:54.656269    5247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:60:c4:4b:d8:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/custom-flannel-688000/disk.qcow2
	I0917 02:44:54.657996    5247 main.go:141] libmachine: STDOUT: 
	I0917 02:44:54.658017    5247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:54.658033    5247 client.go:171] duration metric: took 227.0005ms to LocalClient.Create
	I0917 02:44:56.660253    5247 start.go:128] duration metric: took 2.276245917s to createHost
	I0917 02:44:56.660314    5247 start.go:83] releasing machines lock for "custom-flannel-688000", held for 2.276563875s
	W0917 02:44:56.660593    5247 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:44:56.673870    5247 out.go:201] 
	W0917 02:44:56.678209    5247 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:44:56.678230    5247 out.go:270] * 
	* 
	W0917 02:44:56.680323    5247 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:44:56.689027    5247 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
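
Note that the qemu-img steps succeed on every attempt ("Image resized." with empty STDERR); only the network hand-off fails. The two disk-image commands the driver logs can be reproduced standalone. A sketch, assuming qemu-img is installed and a raw seed image exists in the working directory (the file names are placeholders, not the real $MINIKUBE_HOME/machines paths):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one qemu-img step and echoes its combined output, mirroring
// how the libmachine lines above report STDOUT/STDERR per command.
func run(args ...string) {
	out, err := exec.Command("qemu-img", args...).CombinedOutput()
	fmt.Printf("executing: qemu-img %v\n%s", args, out)
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
	}
}

func main() {
	// Convert the raw boot image to qcow2, then grow it by 20000 MB,
	// matching "Creating 20000 MB hard disk image..." in the log.
	run("convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
	run("resize", "disk.qcow2", "+20000M")
}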

TestNetworkPlugins/group/calico/Start (9.81s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.803417667s)

-- stdout --
	* [calico-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-688000" primary control-plane node in "calico-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:44:59.139592    5367 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:44:59.139753    5367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:59.139757    5367 out.go:358] Setting ErrFile to fd 2...
	I0917 02:44:59.139759    5367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:44:59.139903    5367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:44:59.141352    5367 out.go:352] Setting JSON to false
	I0917 02:44:59.159309    5367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4469,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:44:59.159386    5367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:44:59.164503    5367 out.go:177] * [calico-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:44:59.172581    5367 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:44:59.172592    5367 notify.go:220] Checking for updates...
	I0917 02:44:59.180494    5367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:44:59.183581    5367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:44:59.187529    5367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:44:59.190557    5367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:44:59.193599    5367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:44:59.196841    5367 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:44:59.196904    5367 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:44:59.196958    5367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:44:59.200493    5367 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:44:59.207505    5367 start.go:297] selected driver: qemu2
	I0917 02:44:59.207511    5367 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:44:59.207516    5367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:44:59.209759    5367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:44:59.212534    5367 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:44:59.215574    5367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:44:59.215588    5367 cni.go:84] Creating CNI manager for "calico"
	I0917 02:44:59.215596    5367 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0917 02:44:59.215628    5367 start.go:340] cluster config:
	{Name:calico-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:44:59.219379    5367 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:44:59.227521    5367 out.go:177] * Starting "calico-688000" primary control-plane node in "calico-688000" cluster
	I0917 02:44:59.231383    5367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:44:59.231402    5367 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:44:59.231435    5367 cache.go:56] Caching tarball of preloaded images
	I0917 02:44:59.231500    5367 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:44:59.231505    5367 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:44:59.231562    5367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/calico-688000/config.json ...
	I0917 02:44:59.231573    5367 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/calico-688000/config.json: {Name:mk52235c96e44ace59c7567fab0b1f38c4f4f580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:44:59.231769    5367 start.go:360] acquireMachinesLock for calico-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:44:59.231801    5367 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "calico-688000"
	I0917 02:44:59.231811    5367 start.go:93] Provisioning new machine with config: &{Name:calico-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:44:59.231832    5367 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:44:59.239582    5367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:44:59.254753    5367 start.go:159] libmachine.API.Create for "calico-688000" (driver="qemu2")
	I0917 02:44:59.254787    5367 client.go:168] LocalClient.Create starting
	I0917 02:44:59.254845    5367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:44:59.254875    5367 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:59.254884    5367 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:59.254922    5367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:44:59.254945    5367 main.go:141] libmachine: Decoding PEM data...
	I0917 02:44:59.254953    5367 main.go:141] libmachine: Parsing certificate...
	I0917 02:44:59.255360    5367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:44:59.433525    5367 main.go:141] libmachine: Creating SSH key...
	I0917 02:44:59.533941    5367 main.go:141] libmachine: Creating Disk image...
	I0917 02:44:59.533953    5367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:44:59.534166    5367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:44:59.543828    5367 main.go:141] libmachine: STDOUT: 
	I0917 02:44:59.543844    5367 main.go:141] libmachine: STDERR: 
	I0917 02:44:59.543925    5367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2 +20000M
	I0917 02:44:59.551830    5367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:44:59.551847    5367 main.go:141] libmachine: STDERR: 
	I0917 02:44:59.551868    5367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:44:59.551874    5367 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:44:59.551886    5367 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:44:59.551918    5367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ea:e2:bf:40:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:44:59.553675    5367 main.go:141] libmachine: STDOUT: 
	I0917 02:44:59.553688    5367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:44:59.553718    5367 client.go:171] duration metric: took 298.926791ms to LocalClient.Create
	I0917 02:45:01.555899    5367 start.go:128] duration metric: took 2.324054458s to createHost
	I0917 02:45:01.555951    5367 start.go:83] releasing machines lock for "calico-688000", held for 2.324158083s
	W0917 02:45:01.556011    5367 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:01.569580    5367 out.go:177] * Deleting "calico-688000" in qemu2 ...
	W0917 02:45:01.590345    5367 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:01.590366    5367 start.go:729] Will try again in 5 seconds ...
	I0917 02:45:06.592572    5367 start.go:360] acquireMachinesLock for calico-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:06.593042    5367 start.go:364] duration metric: took 349.5µs to acquireMachinesLock for "calico-688000"
	I0917 02:45:06.593164    5367 start.go:93] Provisioning new machine with config: &{Name:calico-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:06.593378    5367 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:06.601850    5367 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:45:06.640619    5367 start.go:159] libmachine.API.Create for "calico-688000" (driver="qemu2")
	I0917 02:45:06.640676    5367 client.go:168] LocalClient.Create starting
	I0917 02:45:06.640785    5367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:06.640839    5367 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:06.640854    5367 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:06.640905    5367 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:06.640950    5367 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:06.640960    5367 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:06.641446    5367 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:06.805769    5367 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:06.849777    5367 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:06.849783    5367 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:06.849969    5367 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:45:06.859204    5367 main.go:141] libmachine: STDOUT: 
	I0917 02:45:06.859223    5367 main.go:141] libmachine: STDERR: 
	I0917 02:45:06.859286    5367 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2 +20000M
	I0917 02:45:06.867570    5367 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:06.867588    5367 main.go:141] libmachine: STDERR: 
	I0917 02:45:06.867599    5367 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:45:06.867605    5367 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:06.867613    5367 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:06.867644    5367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:87:78:13:20:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/calico-688000/disk.qcow2
	I0917 02:45:06.869389    5367 main.go:141] libmachine: STDOUT: 
	I0917 02:45:06.869405    5367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:06.869417    5367 client.go:171] duration metric: took 228.738583ms to LocalClient.Create
	I0917 02:45:08.871625    5367 start.go:128] duration metric: took 2.278227375s to createHost
	I0917 02:45:08.871706    5367 start.go:83] releasing machines lock for "calico-688000", held for 2.278653917s
	W0917 02:45:08.872073    5367 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:08.881605    5367 out.go:201] 
	W0917 02:45:08.887826    5367 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:08.887854    5367 out.go:270] * 
	* 
	W0917 02:45:08.890807    5367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:08.899647    5367 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
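
The stderr above also records the driver's recovery path: a failed createHost, deletion of the half-created profile, a fixed five-second pause, one more attempt, and only then the GUEST_PROVISION exit. A condensed Go sketch of that control flow (the createHost stub and messages are stand-ins, not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver's host-creation step; in this
	// report it always fails with the socket_vmnet connection error.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}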

TestNetworkPlugins/group/false/Start (9.91s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-688000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.909003584s)

-- stdout --
	* [false-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-688000" primary control-plane node in "false-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:45:11.335843    5484 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:11.335953    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:11.335956    5484 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:11.335958    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:11.336076    5484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:11.337112    5484 out.go:352] Setting JSON to false
	I0917 02:45:11.353474    5484 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4481,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:11.353550    5484 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:11.359902    5484 out.go:177] * [false-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:11.367606    5484 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:11.367668    5484 notify.go:220] Checking for updates...
	I0917 02:45:11.375703    5484 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:11.378606    5484 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:11.382715    5484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:11.385756    5484 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:11.388680    5484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:11.392028    5484 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:11.392097    5484 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:45:11.392142    5484 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:11.396747    5484 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:45:11.403653    5484 start.go:297] selected driver: qemu2
	I0917 02:45:11.403659    5484 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:45:11.403665    5484 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:11.405764    5484 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:45:11.408682    5484 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:45:11.411701    5484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:11.411715    5484 cni.go:84] Creating CNI manager for "false"
	I0917 02:45:11.411741    5484 start.go:340] cluster config:
	{Name:false-688000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:11.415245    5484 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:11.423697    5484 out.go:177] * Starting "false-688000" primary control-plane node in "false-688000" cluster
	I0917 02:45:11.427672    5484 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:45:11.427695    5484 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:45:11.427701    5484 cache.go:56] Caching tarball of preloaded images
	I0917 02:45:11.427758    5484 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:45:11.427763    5484 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:45:11.427809    5484 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/false-688000/config.json ...
	I0917 02:45:11.427819    5484 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/false-688000/config.json: {Name:mk662e851324508091e4d1684c105b8b8a93186f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:45:11.428037    5484 start.go:360] acquireMachinesLock for false-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:11.428068    5484 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "false-688000"
	I0917 02:45:11.428079    5484 start.go:93] Provisioning new machine with config: &{Name:false-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:11.428103    5484 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:11.433652    5484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:45:11.448787    5484 start.go:159] libmachine.API.Create for "false-688000" (driver="qemu2")
	I0917 02:45:11.448810    5484 client.go:168] LocalClient.Create starting
	I0917 02:45:11.448867    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:11.448897    5484 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:11.448906    5484 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:11.448940    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:11.448967    5484 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:11.448977    5484 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:11.449303    5484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:11.608580    5484 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:11.806041    5484 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:11.806051    5484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:11.806293    5484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:11.816105    5484 main.go:141] libmachine: STDOUT: 
	I0917 02:45:11.816127    5484 main.go:141] libmachine: STDERR: 
	I0917 02:45:11.816224    5484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2 +20000M
	I0917 02:45:11.824373    5484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:11.824388    5484 main.go:141] libmachine: STDERR: 
	I0917 02:45:11.824397    5484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:11.824406    5484 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:11.824421    5484 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:11.824450    5484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:38:0b:42:a4:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:11.826144    5484 main.go:141] libmachine: STDOUT: 
	I0917 02:45:11.826160    5484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:11.826181    5484 client.go:171] duration metric: took 377.369ms to LocalClient.Create
	I0917 02:45:13.828390    5484 start.go:128] duration metric: took 2.4002875s to createHost
	I0917 02:45:13.828434    5484 start.go:83] releasing machines lock for "false-688000", held for 2.400374s
	W0917 02:45:13.828469    5484 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:13.838698    5484 out.go:177] * Deleting "false-688000" in qemu2 ...
	W0917 02:45:13.865442    5484 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:13.865458    5484 start.go:729] Will try again in 5 seconds ...
	I0917 02:45:18.867632    5484 start.go:360] acquireMachinesLock for false-688000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:18.868092    5484 start.go:364] duration metric: took 382.666µs to acquireMachinesLock for "false-688000"
	I0917 02:45:18.868216    5484 start.go:93] Provisioning new machine with config: &{Name:false-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:18.868499    5484 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:18.878121    5484 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0917 02:45:18.924554    5484 start.go:159] libmachine.API.Create for "false-688000" (driver="qemu2")
	I0917 02:45:18.924604    5484 client.go:168] LocalClient.Create starting
	I0917 02:45:18.924734    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:18.924801    5484 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:18.924817    5484 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:18.924886    5484 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:18.924930    5484 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:18.924944    5484 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:18.925443    5484 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:19.094073    5484 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:19.162753    5484 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:19.162759    5484 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:19.162943    5484 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:19.172515    5484 main.go:141] libmachine: STDOUT: 
	I0917 02:45:19.172531    5484 main.go:141] libmachine: STDERR: 
	I0917 02:45:19.172585    5484 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2 +20000M
	I0917 02:45:19.180546    5484 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:19.180561    5484 main.go:141] libmachine: STDERR: 
	I0917 02:45:19.180573    5484 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:19.180577    5484 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:19.180589    5484 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:19.180628    5484 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:36:df:8f:90:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/false-688000/disk.qcow2
	I0917 02:45:19.182306    5484 main.go:141] libmachine: STDOUT: 
	I0917 02:45:19.182321    5484 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:19.182334    5484 client.go:171] duration metric: took 257.727166ms to LocalClient.Create
	I0917 02:45:21.184408    5484 start.go:128] duration metric: took 2.31588625s to createHost
	I0917 02:45:21.184424    5484 start.go:83] releasing machines lock for "false-688000", held for 2.316331625s
	W0917 02:45:21.184512    5484 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:21.192065    5484 out.go:201] 
	W0917 02:45:21.196144    5484 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:21.196150    5484 out.go:270] * 
	* 
	W0917 02:45:21.196714    5484 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:21.208129    5484 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.91s)
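
Disk preparation, by contrast, succeeds on every attempt: the driver converts the raw seed image to qcow2 and grows it by 20000M, and both qemu-img calls return empty STDERR. The same two invocations can be replayed from Go (paths are placeholders for the .minikube/machines/<profile>/ files in the log; qemu-img must be on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Replay the two qemu-img steps logged before each VM start.
		raw, img := "disk.qcow2.raw", "disk.qcow2"
		for _, args := range [][]string{
			{"convert", "-f", "raw", "-O", "qcow2", raw, img},
			{"resize", img, "+20000M"},
		} {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v\n%s(err=%v)\n", args, out, err)
		}
	}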

TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.870379834s)

-- stdout --
	* [old-k8s-version-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-336000" primary control-plane node in "old-k8s-version-336000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-336000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0917 02:45:23.452916    5597 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:23.453047    5597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:23.453051    5597 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:23.453053    5597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:23.453214    5597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:23.454277    5597 out.go:352] Setting JSON to false
	I0917 02:45:23.470589    5597 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4493,"bootTime":1726561830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:23.470661    5597 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:23.476766    5597 out.go:177] * [old-k8s-version-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:23.484522    5597 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:23.484568    5597 notify.go:220] Checking for updates...
	I0917 02:45:23.492332    5597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:23.495440    5597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:23.498518    5597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:23.500020    5597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:23.503450    5597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:23.506829    5597 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:23.506900    5597 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:45:23.506944    5597 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:23.511299    5597 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:45:23.518408    5597 start.go:297] selected driver: qemu2
	I0917 02:45:23.518413    5597 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:45:23.518418    5597 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:23.520618    5597 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:45:23.524285    5597 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:45:23.527506    5597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:23.527523    5597 cni.go:84] Creating CNI manager for ""
	I0917 02:45:23.527543    5597 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 02:45:23.527568    5597 start.go:340] cluster config:
	{Name:old-k8s-version-336000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:23.531062    5597 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:23.539375    5597 out.go:177] * Starting "old-k8s-version-336000" primary control-plane node in "old-k8s-version-336000" cluster
	I0917 02:45:23.543429    5597 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 02:45:23.543441    5597 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 02:45:23.543446    5597 cache.go:56] Caching tarball of preloaded images
	I0917 02:45:23.543498    5597 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:45:23.543503    5597 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 02:45:23.543552    5597 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/old-k8s-version-336000/config.json ...
	I0917 02:45:23.543561    5597 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/old-k8s-version-336000/config.json: {Name:mk041b19b84140d5ed9b8e2bba38b673c7606264 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:45:23.543890    5597 start.go:360] acquireMachinesLock for old-k8s-version-336000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:23.543921    5597 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "old-k8s-version-336000"
	I0917 02:45:23.543931    5597 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:23.543961    5597 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:23.551461    5597 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:45:23.566894    5597 start.go:159] libmachine.API.Create for "old-k8s-version-336000" (driver="qemu2")
	I0917 02:45:23.566922    5597 client.go:168] LocalClient.Create starting
	I0917 02:45:23.566978    5597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:23.567014    5597 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:23.567022    5597 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:23.567062    5597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:23.567085    5597 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:23.567094    5597 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:23.567426    5597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:23.722963    5597 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:23.834750    5597 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:23.834757    5597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:23.834953    5597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:23.844368    5597 main.go:141] libmachine: STDOUT: 
	I0917 02:45:23.844390    5597 main.go:141] libmachine: STDERR: 
	I0917 02:45:23.844448    5597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2 +20000M
	I0917 02:45:23.852228    5597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:23.852246    5597 main.go:141] libmachine: STDERR: 
	I0917 02:45:23.852260    5597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:23.852264    5597 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:23.852276    5597 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:23.852315    5597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:2a:56:98:3d:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:23.853966    5597 main.go:141] libmachine: STDOUT: 
	I0917 02:45:23.853986    5597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:23.854005    5597 client.go:171] duration metric: took 287.079375ms to LocalClient.Create
	I0917 02:45:25.856103    5597 start.go:128] duration metric: took 2.312143583s to createHost
	I0917 02:45:25.856136    5597 start.go:83] releasing machines lock for "old-k8s-version-336000", held for 2.312224625s
	W0917 02:45:25.856195    5597 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:25.872926    5597 out.go:177] * Deleting "old-k8s-version-336000" in qemu2 ...
	W0917 02:45:25.887411    5597 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:25.887420    5597 start.go:729] Will try again in 5 seconds ...
	I0917 02:45:30.889716    5597 start.go:360] acquireMachinesLock for old-k8s-version-336000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:30.890186    5597 start.go:364] duration metric: took 352.958µs to acquireMachinesLock for "old-k8s-version-336000"
	I0917 02:45:30.890314    5597 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:30.890500    5597 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:30.899453    5597 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:45:30.940136    5597 start.go:159] libmachine.API.Create for "old-k8s-version-336000" (driver="qemu2")
	I0917 02:45:30.940191    5597 client.go:168] LocalClient.Create starting
	I0917 02:45:30.940296    5597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:30.940349    5597 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:30.940368    5597 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:30.940431    5597 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:30.940471    5597 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:30.940486    5597 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:30.941014    5597 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:31.106227    5597 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:31.233292    5597 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:31.233300    5597 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:31.233498    5597 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:31.242795    5597 main.go:141] libmachine: STDOUT: 
	I0917 02:45:31.242822    5597 main.go:141] libmachine: STDERR: 
	I0917 02:45:31.242882    5597 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2 +20000M
	I0917 02:45:31.250578    5597 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:31.250600    5597 main.go:141] libmachine: STDERR: 
	I0917 02:45:31.250611    5597 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:31.250616    5597 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:31.250625    5597 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:31.250653    5597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:93:b0:8f:9b:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:31.252214    5597 main.go:141] libmachine: STDOUT: 
	I0917 02:45:31.252234    5597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:31.252247    5597 client.go:171] duration metric: took 312.053583ms to LocalClient.Create
	I0917 02:45:33.254457    5597 start.go:128] duration metric: took 2.363936875s to createHost
	I0917 02:45:33.254540    5597 start.go:83] releasing machines lock for "old-k8s-version-336000", held for 2.36434325s
	W0917 02:45:33.254954    5597 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-336000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:33.264801    5597 out.go:201] 
	W0917 02:45:33.271823    5597 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:33.271856    5597 out.go:270] * 
	* 
	W0917 02:45:33.274810    5597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:33.284810    5597 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (53.756875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)
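
Note: every failure in this group traces to the same root cause visible above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a vmnet file descriptor and the VM never boots. A minimal sketch for checking the daemon on the build host follows; the daemon path and --vmnet-gateway value come from the socket_vmnet README defaults and are assumptions, not taken from this log:

	# Does the control socket exist?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon (vmnet.framework requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon up, the socket_vmnet_client ... qemu-system-aarch64 command logged above should attach instead of failing with "Connection refused".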

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-336000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-336000 create -f testdata/busybox.yaml: exit status 1 (28.947834ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-336000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-336000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (30.158083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (29.623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
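
Note: DeployApp fails as a cascade of FirstStart: because no cluster was ever created, minikube never wrote an "old-k8s-version-336000" context into the kubeconfig, so kubectl rejects the --context flag before contacting any server. A quick way to confirm, reusing the KUBECONFIG path shown in the logs above:

	KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig kubectl config get-contexts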

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-336000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-336000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-336000 describe deploy/metrics-server -n kube-system: exit status 1 (27.260833ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-336000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-336000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (30.274958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
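
Note: "addons enable" itself appears to succeed here because, with the host stopped, it only rewrites the profile's config; the assertion then fails at the kubectl step for the same missing-context reason. On a running cluster, the image override the test expects could be checked with something like the following sketch (the jsonpath expression is illustrative, not taken from the test source):

	kubectl --context old-k8s-version-336000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected by the test to contain: fake.domain/registry.k8s.io/echoserver:1.4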

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.189048167s)

                                                
                                                
-- stdout --
	* [old-k8s-version-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-336000" primary control-plane node in "old-k8s-version-336000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-336000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:45:36.984711    5646 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:36.984850    5646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:36.984858    5646 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:36.984860    5646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:36.985012    5646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:36.986076    5646 out.go:352] Setting JSON to false
	I0917 02:45:37.002417    5646 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4506,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:37.002488    5646 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:37.007684    5646 out.go:177] * [old-k8s-version-336000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:37.015626    5646 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:37.015657    5646 notify.go:220] Checking for updates...
	I0917 02:45:37.021624    5646 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:37.025607    5646 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:37.028573    5646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:37.031560    5646 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:37.034595    5646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:37.036214    5646 config.go:182] Loaded profile config "old-k8s-version-336000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 02:45:37.039541    5646 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0917 02:45:37.042591    5646 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:37.046405    5646 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:45:37.053604    5646 start.go:297] selected driver: qemu2
	I0917 02:45:37.053610    5646 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:37.053652    5646 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:37.055964    5646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:37.055986    5646 cni.go:84] Creating CNI manager for ""
	I0917 02:45:37.056015    5646 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 02:45:37.056033    5646 start.go:340] cluster config:
	{Name:old-k8s-version-336000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-336000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:37.059427    5646 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:37.067553    5646 out.go:177] * Starting "old-k8s-version-336000" primary control-plane node in "old-k8s-version-336000" cluster
	I0917 02:45:37.071438    5646 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 02:45:37.071452    5646 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 02:45:37.071460    5646 cache.go:56] Caching tarball of preloaded images
	I0917 02:45:37.071516    5646 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:45:37.071521    5646 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 02:45:37.071568    5646 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/old-k8s-version-336000/config.json ...
	I0917 02:45:37.071966    5646 start.go:360] acquireMachinesLock for old-k8s-version-336000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:37.071993    5646 start.go:364] duration metric: took 21.916µs to acquireMachinesLock for "old-k8s-version-336000"
	I0917 02:45:37.072001    5646 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:45:37.072008    5646 fix.go:54] fixHost starting: 
	I0917 02:45:37.072117    5646 fix.go:112] recreateIfNeeded on old-k8s-version-336000: state=Stopped err=<nil>
	W0917 02:45:37.072125    5646 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:45:37.076638    5646 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-336000" ...
	I0917 02:45:37.084582    5646 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:37.084611    5646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:93:b0:8f:9b:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:37.086524    5646 main.go:141] libmachine: STDOUT: 
	I0917 02:45:37.086541    5646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:37.086571    5646 fix.go:56] duration metric: took 14.564584ms for fixHost
	I0917 02:45:37.086576    5646 start.go:83] releasing machines lock for "old-k8s-version-336000", held for 14.578833ms
	W0917 02:45:37.086581    5646 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:37.086614    5646 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:37.086618    5646 start.go:729] Will try again in 5 seconds ...
	I0917 02:45:42.088233    5646 start.go:360] acquireMachinesLock for old-k8s-version-336000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:42.088697    5646 start.go:364] duration metric: took 365.5µs to acquireMachinesLock for "old-k8s-version-336000"
	I0917 02:45:42.088836    5646 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:45:42.088853    5646 fix.go:54] fixHost starting: 
	I0917 02:45:42.089465    5646 fix.go:112] recreateIfNeeded on old-k8s-version-336000: state=Stopped err=<nil>
	W0917 02:45:42.089484    5646 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:45:42.097848    5646 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-336000" ...
	I0917 02:45:42.101784    5646 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:42.101986    5646 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:93:b0:8f:9b:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/old-k8s-version-336000/disk.qcow2
	I0917 02:45:42.110807    5646 main.go:141] libmachine: STDOUT: 
	I0917 02:45:42.110886    5646 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:42.110990    5646 fix.go:56] duration metric: took 22.135958ms for fixHost
	I0917 02:45:42.111013    5646 start.go:83] releasing machines lock for "old-k8s-version-336000", held for 22.29625ms
	W0917 02:45:42.111184    5646 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-336000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-336000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:42.119812    5646 out.go:201] 
	W0917 02:45:42.123816    5646 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:42.123843    5646 out.go:270] * 
	* 
	W0917 02:45:42.125920    5646 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:42.134834    5646 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (54.64725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
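
Note: SecondStart takes the restart path (fixHost on the existing, stopped machine) rather than create, but it issues the same socket_vmnet_client command and hits the same refused connection. Following the log's own suggestion, recovery once the daemon is reachable would look like the following sketch, using a subset of flags from the failing invocation:

	out/minikube-darwin-arm64 delete -p old-k8s-version-336000
	out/minikube-darwin-arm64 start -p old-k8s-version-336000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0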

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-336000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (31.451542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-336000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-336000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-336000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.968375ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-336000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-336000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (28.872666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-336000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (29.355083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
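
Note: the -want side of the diff is simply the full expected image set for Kubernetes v1.20.0; the +got side is empty because "image list" had no VM (and hence no cached or loaded images) to enumerate. On a healthy profile the same data can be inspected with the command the test runs; the jq filter below is an illustrative assumption about the JSON shape (an array of images with a repoTags field), not part of the test:

	out/minikube-darwin-arm64 -p old-k8s-version-336000 image list --format=json | jq -r '.[].repoTags[]'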

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-336000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-336000 --alsologtostderr -v=1: exit status 83 (40.3825ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-336000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-336000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:45:42.388060    5672 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:42.389085    5672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:42.389093    5672 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:42.389096    5672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:42.389290    5672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:42.389487    5672 out.go:352] Setting JSON to false
	I0917 02:45:42.389496    5672 mustload.go:65] Loading cluster: old-k8s-version-336000
	I0917 02:45:42.389729    5672 config.go:182] Loaded profile config "old-k8s-version-336000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0917 02:45:42.394374    5672 out.go:177] * The control-plane node old-k8s-version-336000 host is not running: state=Stopped
	I0917 02:45:42.397295    5672 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-336000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-336000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (29.746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (28.908709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-336000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
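
Note: pause exits with status 83 rather than 80 because mustload (mustload.go:65 above) detects the Stopped host before any provisioning is attempted and prints guidance instead of an error trace. The state it keys off is the same one the post-mortem queries:

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000
	# prints "Stopped" and exits 7 for a stopped host, as in the helpers_test.go output above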

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.784714459s)

                                                
                                                
-- stdout --
	* [no-preload-105000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-105000" primary control-plane node in "no-preload-105000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-105000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:45:42.709769    5689 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:42.709922    5689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:42.709926    5689 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:42.709928    5689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:42.710050    5689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:42.711109    5689 out.go:352] Setting JSON to false
	I0917 02:45:42.727632    5689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4512,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:42.727706    5689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:42.731224    5689 out.go:177] * [no-preload-105000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:42.735991    5689 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:42.736074    5689 notify.go:220] Checking for updates...
	I0917 02:45:42.743172    5689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:42.744534    5689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:42.747133    5689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:42.750220    5689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:42.753202    5689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:42.756544    5689 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:42.756604    5689 config.go:182] Loaded profile config "stopped-upgrade-288000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0917 02:45:42.756647    5689 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:42.761194    5689 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:45:42.768161    5689 start.go:297] selected driver: qemu2
	I0917 02:45:42.768167    5689 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:45:42.768172    5689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:42.770412    5689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:45:42.773203    5689 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:45:42.776246    5689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:42.776269    5689 cni.go:84] Creating CNI manager for ""
	I0917 02:45:42.776296    5689 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:45:42.776304    5689 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:45:42.776343    5689 start.go:340] cluster config:
	{Name:no-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:42.779961    5689 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.787157    5689 out.go:177] * Starting "no-preload-105000" primary control-plane node in "no-preload-105000" cluster
	I0917 02:45:42.790065    5689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:45:42.790147    5689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/no-preload-105000/config.json ...
	I0917 02:45:42.790168    5689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/no-preload-105000/config.json: {Name:mkad8af60282176f1e18e1671ade02c2db0b51a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:45:42.790170    5689 cache.go:107] acquiring lock: {Name:mkf6d3b5ad97f9f93f3533d57fe6c066351e6c41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790179    5689 cache.go:107] acquiring lock: {Name:mkab1e37cbc263e4ad02c96576bb0c71290ec7b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790191    5689 cache.go:107] acquiring lock: {Name:mke98d0e041a764dd48c719b95ac989a1dbbbc1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790243    5689 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 02:45:42.790251    5689 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 73.25µs
	I0917 02:45:42.790257    5689 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 02:45:42.790262    5689 cache.go:107] acquiring lock: {Name:mkd970860dc0e1ae13ee444896f1832a79741e80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790318    5689 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:45:42.790353    5689 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:45:42.790344    5689 cache.go:107] acquiring lock: {Name:mk095b44492f1a5f521819d273dbcfa74241507f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790451    5689 cache.go:107] acquiring lock: {Name:mk118a0ddf3a7ed975f8d05f0310eeac63d89a71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790477    5689 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:45:42.790458    5689 cache.go:107] acquiring lock: {Name:mk26701528717324261fe270bfdf520dcb77e38b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790504    5689 start.go:360] acquireMachinesLock for no-preload-105000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:42.790527    5689 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0917 02:45:42.790540    5689 start.go:364] duration metric: took 28.25µs to acquireMachinesLock for "no-preload-105000"
	I0917 02:45:42.790561    5689 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:45:42.790551    5689 start.go:93] Provisioning new machine with config: &{Name:no-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:42.790577    5689 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:42.790617    5689 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0917 02:45:42.790599    5689 cache.go:107] acquiring lock: {Name:mk18d0819e0ba5c54d0eb2941c0ca175c1a6f940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:42.790981    5689 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:45:42.794222    5689 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:45:42.801644    5689 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0917 02:45:42.801693    5689 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 02:45:42.801712    5689 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0917 02:45:42.801715    5689 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0917 02:45:42.801690    5689 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0917 02:45:42.801742    5689 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0917 02:45:42.801696    5689 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0917 02:45:42.810239    5689 start.go:159] libmachine.API.Create for "no-preload-105000" (driver="qemu2")
	I0917 02:45:42.810270    5689 client.go:168] LocalClient.Create starting
	I0917 02:45:42.810348    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:42.810379    5689 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:42.810389    5689 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:42.810430    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:42.810455    5689 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:42.810465    5689 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:42.810809    5689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:42.976364    5689 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:43.102164    5689 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:43.102183    5689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:43.102609    5689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:43.111924    5689 main.go:141] libmachine: STDOUT: 
	I0917 02:45:43.111945    5689 main.go:141] libmachine: STDERR: 
	I0917 02:45:43.112006    5689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2 +20000M
	I0917 02:45:43.120580    5689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:43.120595    5689 main.go:141] libmachine: STDERR: 
	I0917 02:45:43.120616    5689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:43.120626    5689 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:43.120636    5689 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:43.120661    5689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:93:0e:46:a3:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:43.122630    5689 main.go:141] libmachine: STDOUT: 
	I0917 02:45:43.122642    5689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:43.122660    5689 client.go:171] duration metric: took 312.386917ms to LocalClient.Create
	I0917 02:45:43.202535    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0917 02:45:43.206186    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0917 02:45:43.213171    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0917 02:45:43.214738    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0917 02:45:43.241548    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0917 02:45:43.247355    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0917 02:45:43.279289    5689 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0917 02:45:43.363427    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0917 02:45:43.363442    5689 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 573.038583ms
	I0917 02:45:43.363453    5689 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0917 02:45:45.122839    5689 start.go:128] duration metric: took 2.332242959s to createHost
	I0917 02:45:45.122868    5689 start.go:83] releasing machines lock for "no-preload-105000", held for 2.332338541s
	W0917 02:45:45.122885    5689 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:45.131453    5689 out.go:177] * Deleting "no-preload-105000" in qemu2 ...
	W0917 02:45:45.143200    5689 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:45.143209    5689 start.go:729] Will try again in 5 seconds ...
	I0917 02:45:45.524641    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0917 02:45:45.524657    5689 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.734288625s
	I0917 02:45:45.524666    5689 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0917 02:45:46.036531    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0917 02:45:46.036574    5689 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.246413458s
	I0917 02:45:46.036585    5689 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0917 02:45:46.143386    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0917 02:45:46.143399    5689 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.353015708s
	I0917 02:45:46.143405    5689 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0917 02:45:46.359997    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0917 02:45:46.360027    5689 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.569787s
	I0917 02:45:46.360037    5689 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0917 02:45:47.134344    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0917 02:45:47.134376    5689 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.34424275s
	I0917 02:45:47.134389    5689 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0917 02:45:49.667369    5689 cache.go:157] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0917 02:45:49.667397    5689 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 6.877120292s
	I0917 02:45:49.667412    5689 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0917 02:45:49.667451    5689 cache.go:87] Successfully saved all images to host disk.
	I0917 02:45:50.145280    5689 start.go:360] acquireMachinesLock for no-preload-105000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:50.145444    5689 start.go:364] duration metric: took 139.417µs to acquireMachinesLock for "no-preload-105000"
	I0917 02:45:50.145495    5689 start.go:93] Provisioning new machine with config: &{Name:no-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:50.145553    5689 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:50.155848    5689 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:45:50.174059    5689 start.go:159] libmachine.API.Create for "no-preload-105000" (driver="qemu2")
	I0917 02:45:50.174110    5689 client.go:168] LocalClient.Create starting
	I0917 02:45:50.174186    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:50.174231    5689 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:50.174242    5689 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:50.174285    5689 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:50.174308    5689 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:50.174314    5689 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:50.174606    5689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:50.336734    5689 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:50.395841    5689 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:50.395847    5689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:50.396028    5689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:50.405337    5689 main.go:141] libmachine: STDOUT: 
	I0917 02:45:50.405360    5689 main.go:141] libmachine: STDERR: 
	I0917 02:45:50.405426    5689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2 +20000M
	I0917 02:45:50.413425    5689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:50.413441    5689 main.go:141] libmachine: STDERR: 
	I0917 02:45:50.413452    5689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:50.413456    5689 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:50.413465    5689 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:50.413499    5689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d1:cd:38:30:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:50.415232    5689 main.go:141] libmachine: STDOUT: 
	I0917 02:45:50.415247    5689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:50.415260    5689 client.go:171] duration metric: took 241.147375ms to LocalClient.Create
	I0917 02:45:52.417504    5689 start.go:128] duration metric: took 2.271933958s to createHost
	I0917 02:45:52.417636    5689 start.go:83] releasing machines lock for "no-preload-105000", held for 2.272191334s
	W0917 02:45:52.418001    5689 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:52.433707    5689 out.go:201] 
	W0917 02:45:52.437836    5689 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:52.437860    5689 out.go:270] * 
	* 
	W0917 02:45:52.440148    5689 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:45:52.451612    5689 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (65.241292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)
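
Every FirstStart failure in this group bottoms out in the same stderr line: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the QEMU VM is never launched. A minimal diagnostic on the build agent, assuming the Homebrew-managed socket_vmnet install implied by the /opt/socket_vmnet paths above (the launchctl/brew service commands are assumptions, not taken from this log):

	# Does the socket exist, and is the daemon holding it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon if it is down (requires Homebrew services; assumed setup):
	sudo brew services restart socket_vmnet
	# Reproduce the exact failure mode without involving QEMU:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true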

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-105000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-105000 create -f testdata/busybox.yaml: exit status 1 (30.670875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-105000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-105000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (30.786833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (29.994833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
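
This failure is a pure cascade from FirstStart: the cluster was never created, so no kubeconfig context named no-preload-105000 exists for kubectl to resolve. Confirming the missing context needs nothing beyond the kubeconfig path already shown in the start output:

	kubectl --kubeconfig /Users/jenkins/minikube-integration/19648-1056/kubeconfig config get-contexts
	# no-preload-105000 will be absent, matching the
	# 'context "no-preload-105000" does not exist' error above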

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-105000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-105000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-105000 describe deploy/metrics-server -n kube-system: exit status 1 (27.497708ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-105000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-105000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (29.967667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
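
For reference, the assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference fake.domain/registry.k8s.io/echoserver:1.4, i.e. the registry override prepended to the image override from the enable call (flags reproduced from the invocation above):

	out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-105000 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

Here the describe step fails first: with no running cluster there is no context to query, so the deployment info is empty and the image check has nothing to match.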

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.979198959s)

                                                
                                                
-- stdout --
	* [embed-certs-347000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-347000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:45:53.942739    5754 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:53.942874    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:53.942876    5754 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:53.942879    5754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:53.943001    5754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:53.944052    5754 out.go:352] Setting JSON to false
	I0917 02:45:53.960664    5754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4523,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:53.960727    5754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:53.966128    5754 out.go:177] * [embed-certs-347000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:53.974103    5754 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:53.974175    5754 notify.go:220] Checking for updates...
	I0917 02:45:53.981089    5754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:53.984051    5754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:53.987081    5754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:53.990065    5754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:53.992983    5754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:53.996395    5754 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:53.996464    5754 config.go:182] Loaded profile config "no-preload-105000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:53.996509    5754 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:54.001044    5754 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:45:54.008052    5754 start.go:297] selected driver: qemu2
	I0917 02:45:54.008058    5754 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:45:54.008064    5754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:54.010375    5754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:45:54.013066    5754 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:45:54.014683    5754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:54.014717    5754 cni.go:84] Creating CNI manager for ""
	I0917 02:45:54.014740    5754 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:45:54.014748    5754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:45:54.014781    5754 start.go:340] cluster config:
	{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:54.018556    5754 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:54.026106    5754 out.go:177] * Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	I0917 02:45:54.030034    5754 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:45:54.030055    5754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:45:54.030064    5754 cache.go:56] Caching tarball of preloaded images
	I0917 02:45:54.030129    5754 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:45:54.030136    5754 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:45:54.030232    5754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/embed-certs-347000/config.json ...
	I0917 02:45:54.030244    5754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/embed-certs-347000/config.json: {Name:mk542c441ded8d302d5406afdd6fe5282cf257b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:45:54.030473    5754 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:54.030510    5754 start.go:364] duration metric: took 31.083µs to acquireMachinesLock for "embed-certs-347000"
	I0917 02:45:54.030523    5754 start.go:93] Provisioning new machine with config: &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:45:54.030565    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:45:54.039083    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:45:54.057409    5754 start.go:159] libmachine.API.Create for "embed-certs-347000" (driver="qemu2")
	I0917 02:45:54.057436    5754 client.go:168] LocalClient.Create starting
	I0917 02:45:54.057498    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:45:54.057530    5754 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:54.057540    5754 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:54.057581    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:45:54.057605    5754 main.go:141] libmachine: Decoding PEM data...
	I0917 02:45:54.057613    5754 main.go:141] libmachine: Parsing certificate...
	I0917 02:45:54.057980    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:45:54.271132    5754 main.go:141] libmachine: Creating SSH key...
	I0917 02:45:54.400001    5754 main.go:141] libmachine: Creating Disk image...
	I0917 02:45:54.400007    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:45:54.400182    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:45:54.409265    5754 main.go:141] libmachine: STDOUT: 
	I0917 02:45:54.409285    5754 main.go:141] libmachine: STDERR: 
	I0917 02:45:54.409341    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2 +20000M
	I0917 02:45:54.417151    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:45:54.417165    5754 main.go:141] libmachine: STDERR: 
	I0917 02:45:54.417184    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:45:54.417194    5754 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:45:54.417208    5754 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:54.417232    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:45:62:76:0c:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:45:54.418873    5754 main.go:141] libmachine: STDOUT: 
	I0917 02:45:54.418892    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:54.418914    5754 client.go:171] duration metric: took 361.473167ms to LocalClient.Create
	I0917 02:45:56.421099    5754 start.go:128] duration metric: took 2.390525542s to createHost
	I0917 02:45:56.421196    5754 start.go:83] releasing machines lock for "embed-certs-347000", held for 2.390691166s
	W0917 02:45:56.421255    5754 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:56.436781    5754 out.go:177] * Deleting "embed-certs-347000" in qemu2 ...
	W0917 02:45:56.471167    5754 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:56.471191    5754 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:01.473449    5754 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:01.477974    5754 start.go:364] duration metric: took 4.444375ms to acquireMachinesLock for "embed-certs-347000"
	I0917 02:46:01.478026    5754 start.go:93] Provisioning new machine with config: &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:46:01.478237    5754 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:46:01.490875    5754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:46:01.536548    5754 start.go:159] libmachine.API.Create for "embed-certs-347000" (driver="qemu2")
	I0917 02:46:01.536600    5754 client.go:168] LocalClient.Create starting
	I0917 02:46:01.536707    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:46:01.536769    5754 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:01.536789    5754 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:01.536873    5754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:46:01.536918    5754 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:01.536940    5754 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:01.537451    5754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:46:01.707336    5754 main.go:141] libmachine: Creating SSH key...
	I0917 02:46:01.831432    5754 main.go:141] libmachine: Creating Disk image...
	I0917 02:46:01.831446    5754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:46:01.831687    5754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:46:01.841892    5754 main.go:141] libmachine: STDOUT: 
	I0917 02:46:01.841913    5754 main.go:141] libmachine: STDERR: 
	I0917 02:46:01.841985    5754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2 +20000M
	I0917 02:46:01.850794    5754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:46:01.850814    5754 main.go:141] libmachine: STDERR: 
	I0917 02:46:01.850832    5754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:46:01.850837    5754 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:46:01.850847    5754 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:01.850879    5754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:14:6b:34:1e:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:46:01.852657    5754 main.go:141] libmachine: STDOUT: 
	I0917 02:46:01.852673    5754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:01.852686    5754 client.go:171] duration metric: took 316.08325ms to LocalClient.Create
	I0917 02:46:03.854888    5754 start.go:128] duration metric: took 2.376627833s to createHost
	I0917 02:46:03.854967    5754 start.go:83] releasing machines lock for "embed-certs-347000", held for 2.37698575s
	W0917 02:46:03.855312    5754 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:03.866793    5754 out.go:201] 
	W0917 02:46:03.870823    5754 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:03.870856    5754 out.go:270] * 
	* 
	W0917 02:46:03.873391    5754 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:03.884755    5754 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (51.295125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
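
Every start failure in this section reduces to the same host-side error: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was accepting connections on the build agent. A minimal Go sketch for probing the socket directly (path taken from the logs above; this is a diagnostic illustration, not minikube code):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failures above
	if _, err := os.Stat(sock); err != nil {
		fmt.Printf("socket file problem: %v\n", err) // daemon never created it
		return
	}
	// "Connection refused" here means the file exists but nothing is
	// listening -- exactly the state these tests ran into.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("dial failed: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused while the socket file exists, restarting the socket_vmnet daemon on the agent (typically managed via launchd on macOS) is the usual fix; note the probe may need the same privileges the daemon grants its clients.
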
TestStartStop/group/no-preload/serial/SecondStart (6.61s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
E0917 02:46:01.152547    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.564155292s)
-- stdout --
	* [no-preload-105000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-105000" primary control-plane node in "no-preload-105000" cluster
	* Restarting existing qemu2 VM for "no-preload-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0917 02:45:54.982207    5772 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:45:54.982353    5772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:54.982357    5772 out.go:358] Setting ErrFile to fd 2...
	I0917 02:45:54.982359    5772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:45:54.982486    5772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:45:54.983573    5772 out.go:352] Setting JSON to false
	I0917 02:45:54.999793    5772 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4524,"bootTime":1726561830,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:45:54.999863    5772 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:45:55.003669    5772 out.go:177] * [no-preload-105000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:45:55.010690    5772 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:45:55.010730    5772 notify.go:220] Checking for updates...
	I0917 02:45:55.018626    5772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:45:55.021705    5772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:45:55.024643    5772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:45:55.027650    5772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:45:55.030687    5772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:45:55.033824    5772 config.go:182] Loaded profile config "no-preload-105000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:45:55.034100    5772 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:45:55.038655    5772 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:45:55.045653    5772 start.go:297] selected driver: qemu2
	I0917 02:45:55.045659    5772 start.go:901] validating driver "qemu2" against &{Name:no-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:55.045724    5772 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:45:55.048137    5772 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:45:55.048165    5772 cni.go:84] Creating CNI manager for ""
	I0917 02:45:55.048193    5772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:45:55.048210    5772 start.go:340] cluster config:
	{Name:no-preload-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:45:55.051850    5772 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.059645    5772 out.go:177] * Starting "no-preload-105000" primary control-plane node in "no-preload-105000" cluster
	I0917 02:45:55.063637    5772 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:45:55.063716    5772 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/no-preload-105000/config.json ...
	I0917 02:45:55.063752    5772 cache.go:107] acquiring lock: {Name:mkab1e37cbc263e4ad02c96576bb0c71290ec7b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063756    5772 cache.go:107] acquiring lock: {Name:mk118a0ddf3a7ed975f8d05f0310eeac63d89a71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063778    5772 cache.go:107] acquiring lock: {Name:mkf6d3b5ad97f9f93f3533d57fe6c066351e6c41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063808    5772 cache.go:107] acquiring lock: {Name:mke98d0e041a764dd48c719b95ac989a1dbbbc1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063815    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0917 02:45:55.063840    5772 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.375µs
	I0917 02:45:55.063857    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0917 02:45:55.063858    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0917 02:45:55.063860    5772 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0917 02:45:55.063861    5772 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 54.375µs
	I0917 02:45:55.063863    5772 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 118.541µs
	I0917 02:45:55.063869    5772 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0917 02:45:55.063826    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0917 02:45:55.063866    5772 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0917 02:45:55.063869    5772 cache.go:107] acquiring lock: {Name:mkd970860dc0e1ae13ee444896f1832a79741e80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063871    5772 cache.go:107] acquiring lock: {Name:mk095b44492f1a5f521819d273dbcfa74241507f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063916    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0917 02:45:55.063877    5772 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 143.291µs
	I0917 02:45:55.063935    5772 cache.go:107] acquiring lock: {Name:mk18d0819e0ba5c54d0eb2941c0ca175c1a6f940 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063941    5772 cache.go:107] acquiring lock: {Name:mk26701528717324261fe270bfdf520dcb77e38b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:45:55.063952    5772 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0917 02:45:55.063920    5772 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 52.042µs
	I0917 02:45:55.063963    5772 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0917 02:45:55.063983    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0917 02:45:55.063987    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0917 02:45:55.063988    5772 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 117.625µs
	I0917 02:45:55.063996    5772 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0917 02:45:55.063990    5772 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 57.542µs
	I0917 02:45:55.063999    5772 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0917 02:45:55.064002    5772 cache.go:115] /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0917 02:45:55.064005    5772 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 87.583µs
	I0917 02:45:55.064009    5772 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0917 02:45:55.064013    5772 cache.go:87] Successfully saved all images to host disk.
	I0917 02:45:55.064152    5772 start.go:360] acquireMachinesLock for no-preload-105000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:45:56.421330    5772 start.go:364] duration metric: took 1.357165s to acquireMachinesLock for "no-preload-105000"
	I0917 02:45:56.421498    5772 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:45:56.421558    5772 fix.go:54] fixHost starting: 
	I0917 02:45:56.422252    5772 fix.go:112] recreateIfNeeded on no-preload-105000: state=Stopped err=<nil>
	W0917 02:45:56.422305    5772 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:45:56.428868    5772 out.go:177] * Restarting existing qemu2 VM for "no-preload-105000" ...
	I0917 02:45:56.441888    5772 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:45:56.442116    5772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d1:cd:38:30:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:45:56.452297    5772 main.go:141] libmachine: STDOUT: 
	I0917 02:45:56.452412    5772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:45:56.452568    5772 fix.go:56] duration metric: took 31.010333ms for fixHost
	I0917 02:45:56.452600    5772 start.go:83] releasing machines lock for "no-preload-105000", held for 31.232334ms
	W0917 02:45:56.452631    5772 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:45:56.452842    5772 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:45:56.452866    5772 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:01.455075    5772 start.go:360] acquireMachinesLock for no-preload-105000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:01.455497    5772 start.go:364] duration metric: took 352.125µs to acquireMachinesLock for "no-preload-105000"
	I0917 02:46:01.455612    5772 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:01.455635    5772 fix.go:54] fixHost starting: 
	I0917 02:46:01.456394    5772 fix.go:112] recreateIfNeeded on no-preload-105000: state=Stopped err=<nil>
	W0917 02:46:01.456418    5772 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:01.463318    5772 out.go:177] * Restarting existing qemu2 VM for "no-preload-105000" ...
	I0917 02:46:01.467842    5772 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:01.468228    5772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:d1:cd:38:30:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/no-preload-105000/disk.qcow2
	I0917 02:46:01.477766    5772 main.go:141] libmachine: STDOUT: 
	I0917 02:46:01.477832    5772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:01.477902    5772 fix.go:56] duration metric: took 22.27ms for fixHost
	I0917 02:46:01.477920    5772 start.go:83] releasing machines lock for "no-preload-105000", held for 22.403542ms
	W0917 02:46:01.478057    5772 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-105000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-105000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:01.494687    5772 out.go:201] 
	W0917 02:46:01.498856    5772 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:01.498884    5772 out.go:270] * 
	* 
	W0917 02:46:01.500857    5772 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:01.508755    5772 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (49.449959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.61s)
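
The trace above shows minikube's fixed retry: fixHost fails, start.go logs "Will try again in 5 seconds ...", and the second attempt fails identically before the run exits with GUEST_PROVISION. A schematic Go sketch of that one-retry flow (function names are illustrative, not minikube's actual signatures):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start; in this run it always failed.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}

Since the root cause is a dead daemon rather than a transient race, the 5-second backoff cannot succeed, which is why every profile in this group burns roughly the same 6-12 seconds before failing.
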
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-105000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (34.990208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
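
This failure happens before any pod wait: because the profile never started, the kubeconfig has no context named no-preload-105000, so building a client config fails immediately. A small client-go sketch of the same check (kubeconfig path copied from the environment lines above; assumes the k8s.io/client-go module):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the test environment above.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19648-1056/kubeconfig")
	if err != nil {
		fmt.Printf("load kubeconfig: %v\n", err)
		return
	}
	if _, ok := cfg.Contexts["no-preload-105000"]; !ok {
		// This is the condition the test tripped over.
		fmt.Println(`context "no-preload-105000" does not exist`)
	}
}
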
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-105000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.648084ms)
** stderr ** 
	error: context "no-preload-105000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (33.300625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-105000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (30.1675ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
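
The "-want +got" diff above has the shape of go-cmp output: every expected v1.31.1 image is reported missing because the host is Stopped and `image list` returned nothing. A sketch reproducing that comparison (assuming github.com/google/go-cmp; the test's actual helper may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the VM never started, so `image list` saw nothing
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}
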
TestStartStop/group/no-preload/serial/Pause (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-105000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-105000 --alsologtostderr -v=1: exit status 83 (44.322ms)
-- stdout --
	* The control-plane node no-preload-105000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-105000"
-- /stdout --
** stderr ** 
	I0917 02:46:01.779108    5795 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:01.779264    5795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:01.779272    5795 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:01.779274    5795 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:01.779407    5795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:01.779653    5795 out.go:352] Setting JSON to false
	I0917 02:46:01.779662    5795 mustload.go:65] Loading cluster: no-preload-105000
	I0917 02:46:01.779885    5795 config.go:182] Loaded profile config "no-preload-105000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:01.782865    5795 out.go:177] * The control-plane node no-preload-105000 host is not running: state=Stopped
	I0917 02:46:01.786648    5795 out.go:177]   To start a cluster, run: "minikube start -p no-preload-105000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-105000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (29.82975ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (30.094125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
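
Each post-mortem above runs `out/minikube-darwin-arm64 status --format={{.Host}}` and tolerates exit status 7, which the helpers annotate as the stopped-host case ("may be ok"). A minimal os/exec sketch of that tolerant status check (command line copied from the helpers; the exit-code handling is an illustration, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-105000", "-n", "no-preload-105000")
	out, err := cmd.Output()
	fmt.Print(string(out)) // e.g. "Stopped"
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Non-zero exit, but only because the host is stopped.
		fmt.Println("status error: exit status 7 (may be ok)")
	}
}
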
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.59s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.521680875s)
-- stdout --
	* [default-k8s-diff-port-832000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-832000" primary control-plane node in "default-k8s-diff-port-832000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-832000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0917 02:46:02.211730    5822 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:02.211857    5822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:02.211859    5822 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:02.211862    5822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:02.211983    5822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:02.213082    5822 out.go:352] Setting JSON to false
	I0917 02:46:02.229280    5822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4532,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:46:02.229346    5822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:46:02.233687    5822 out.go:177] * [default-k8s-diff-port-832000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:46:02.240831    5822 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:46:02.240898    5822 notify.go:220] Checking for updates...
	I0917 02:46:02.247783    5822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:46:02.251761    5822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:46:02.254788    5822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:46:02.257816    5822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:46:02.260824    5822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:46:02.264091    5822 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:02.264156    5822 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:02.264221    5822 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:46:02.268764    5822 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:46:02.274767    5822 start.go:297] selected driver: qemu2
	I0917 02:46:02.274773    5822 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:46:02.274780    5822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:46:02.277089    5822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 02:46:02.279747    5822 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:46:02.282868    5822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:46:02.282884    5822 cni.go:84] Creating CNI manager for ""
	I0917 02:46:02.282907    5822 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:46:02.282914    5822 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:46:02.282938    5822 start.go:340] cluster config:
	{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:02.286703    5822 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:46:02.293787    5822 out.go:177] * Starting "default-k8s-diff-port-832000" primary control-plane node in "default-k8s-diff-port-832000" cluster
	I0917 02:46:02.297770    5822 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:46:02.297786    5822 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:46:02.297799    5822 cache.go:56] Caching tarball of preloaded images
	I0917 02:46:02.297868    5822 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:46:02.297874    5822 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:46:02.297932    5822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/default-k8s-diff-port-832000/config.json ...
	I0917 02:46:02.297944    5822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/default-k8s-diff-port-832000/config.json: {Name:mk86d1a081193aba79f9ef934ef1fb7c0052d29c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:46:02.298163    5822 start.go:360] acquireMachinesLock for default-k8s-diff-port-832000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:03.855158    5822 start.go:364] duration metric: took 1.556969541s to acquireMachinesLock for "default-k8s-diff-port-832000"
	I0917 02:46:03.855339    5822 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:46:03.855558    5822 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:46:03.863813    5822 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:46:03.914976    5822 start.go:159] libmachine.API.Create for "default-k8s-diff-port-832000" (driver="qemu2")
	I0917 02:46:03.915030    5822 client.go:168] LocalClient.Create starting
	I0917 02:46:03.915140    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:46:03.915196    5822 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:03.915213    5822 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:03.915276    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:46:03.915319    5822 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:03.915338    5822 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:03.915981    5822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:46:04.086509    5822 main.go:141] libmachine: Creating SSH key...
	I0917 02:46:04.214896    5822 main.go:141] libmachine: Creating Disk image...
	I0917 02:46:04.214903    5822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:46:04.215073    5822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:04.224248    5822 main.go:141] libmachine: STDOUT: 
	I0917 02:46:04.224275    5822 main.go:141] libmachine: STDERR: 
	I0917 02:46:04.224334    5822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2 +20000M
	I0917 02:46:04.232415    5822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:46:04.232443    5822 main.go:141] libmachine: STDERR: 
	I0917 02:46:04.232466    5822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:04.232472    5822 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:46:04.232481    5822 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:04.232515    5822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:c2:db:ba:2e:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:04.234281    5822 main.go:141] libmachine: STDOUT: 
	I0917 02:46:04.234296    5822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:04.234317    5822 client.go:171] duration metric: took 319.281917ms to LocalClient.Create
	I0917 02:46:06.236515    5822 start.go:128] duration metric: took 2.380933833s to createHost
	I0917 02:46:06.236593    5822 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 2.381415417s
	W0917 02:46:06.236644    5822 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:06.242940    5822 out.go:177] * Deleting "default-k8s-diff-port-832000" in qemu2 ...
	W0917 02:46:06.275446    5822 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:06.275471    5822 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:11.277673    5822 start.go:360] acquireMachinesLock for default-k8s-diff-port-832000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:11.278062    5822 start.go:364] duration metric: took 298.833µs to acquireMachinesLock for "default-k8s-diff-port-832000"
	I0917 02:46:11.278200    5822 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:46:11.278482    5822 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:46:11.283054    5822 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:46:11.333365    5822 start.go:159] libmachine.API.Create for "default-k8s-diff-port-832000" (driver="qemu2")
	I0917 02:46:11.333439    5822 client.go:168] LocalClient.Create starting
	I0917 02:46:11.333587    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:46:11.333665    5822 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:11.333687    5822 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:11.333756    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:46:11.333804    5822 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:11.333817    5822 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:11.334433    5822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:46:11.506092    5822 main.go:141] libmachine: Creating SSH key...
	I0917 02:46:11.621148    5822 main.go:141] libmachine: Creating Disk image...
	I0917 02:46:11.621154    5822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:46:11.621350    5822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:11.630369    5822 main.go:141] libmachine: STDOUT: 
	I0917 02:46:11.630386    5822 main.go:141] libmachine: STDERR: 
	I0917 02:46:11.630448    5822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2 +20000M
	I0917 02:46:11.638539    5822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:46:11.638557    5822 main.go:141] libmachine: STDERR: 
	I0917 02:46:11.638570    5822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:11.638579    5822 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:46:11.638599    5822 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:11.638626    5822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:32:4b:53:06:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:11.640322    5822 main.go:141] libmachine: STDOUT: 
	I0917 02:46:11.640337    5822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:11.640350    5822 client.go:171] duration metric: took 306.893917ms to LocalClient.Create
	I0917 02:46:13.642580    5822 start.go:128] duration metric: took 2.364026417s to createHost
	I0917 02:46:13.642683    5822 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 2.364572833s
	W0917 02:46:13.642972    5822 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:13.655414    5822 out.go:201] 
	W0917 02:46:13.665556    5822 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:13.665589    5822 out.go:270] * 
	* 
	W0917 02:46:13.667961    5822 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:13.680504    5822 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (67.553625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.59s)
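
Every qemu2-driver start in this run fails at the same point: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU is even launched. A minimal triage sketch for the CI host, assuming the install paths that appear in the log above; the launchd service label is an assumption:

	# is the unix socket present at the path minikube passes to the client?
	ls -l /var/run/socket_vmnet
	# does anything accept connections on it? (BSD nc supports unix sockets)
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket accepts connections"
	# is a socket_vmnet daemon loaded at all? (service label assumed)
	sudo launchctl list | grep -i socket_vmnet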

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-347000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-347000 create -f testdata/busybox.yaml: exit status 1 (31.279667ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-347000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-347000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (34.008083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (33.657917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
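
The "context does not exist" errors in this and the following tests are downstream of the failed FirstStart: minikube never wrote an embed-certs-347000 context into the kubeconfig, so every kubectl --context call fails immediately. A quick confirmation from the same workspace, using only standard kubectl and the KUBECONFIG path shown in the logs:

	KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig \
	  kubectl config get-contexts   # the profile's context is simply absent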

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-347000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system: exit status 1 (27.381917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-347000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-347000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (29.147333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.547218708s)

                                                
                                                
-- stdout --
	* [embed-certs-347000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	* Restarting existing qemu2 VM for "embed-certs-347000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-347000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:07.199958    5866 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:07.200105    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:07.200108    5866 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:07.200111    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:07.200245    5866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:07.201243    5866 out.go:352] Setting JSON to false
	I0917 02:46:07.217369    5866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4537,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:46:07.217437    5866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:46:07.221933    5866 out.go:177] * [embed-certs-347000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:46:07.229015    5866 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:46:07.229060    5866 notify.go:220] Checking for updates...
	I0917 02:46:07.234868    5866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:46:07.237911    5866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:46:07.240958    5866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:46:07.242422    5866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:46:07.245974    5866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:46:07.249203    5866 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:07.249445    5866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:46:07.253792    5866 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:46:07.260952    5866 start.go:297] selected driver: qemu2
	I0917 02:46:07.260960    5866 start.go:901] validating driver "qemu2" against &{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:07.261035    5866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:46:07.263374    5866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:46:07.263403    5866 cni.go:84] Creating CNI manager for ""
	I0917 02:46:07.263428    5866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:46:07.263458    5866 start.go:340] cluster config:
	{Name:embed-certs-347000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:07.266742    5866 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:46:07.276002    5866 out.go:177] * Starting "embed-certs-347000" primary control-plane node in "embed-certs-347000" cluster
	I0917 02:46:07.279914    5866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:46:07.279929    5866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:46:07.279936    5866 cache.go:56] Caching tarball of preloaded images
	I0917 02:46:07.279996    5866 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:46:07.280001    5866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:46:07.280053    5866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/embed-certs-347000/config.json ...
	I0917 02:46:07.280503    5866 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:07.280539    5866 start.go:364] duration metric: took 29.958µs to acquireMachinesLock for "embed-certs-347000"
	I0917 02:46:07.280548    5866 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:07.280555    5866 fix.go:54] fixHost starting: 
	I0917 02:46:07.280672    5866 fix.go:112] recreateIfNeeded on embed-certs-347000: state=Stopped err=<nil>
	W0917 02:46:07.280682    5866 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:07.283935    5866 out.go:177] * Restarting existing qemu2 VM for "embed-certs-347000" ...
	I0917 02:46:07.291914    5866 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:07.291960    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:14:6b:34:1e:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:46:07.293971    5866 main.go:141] libmachine: STDOUT: 
	I0917 02:46:07.293987    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:07.294021    5866 fix.go:56] duration metric: took 13.467209ms for fixHost
	I0917 02:46:07.294025    5866 start.go:83] releasing machines lock for "embed-certs-347000", held for 13.482041ms
	W0917 02:46:07.294030    5866 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:07.294075    5866 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:07.294079    5866 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:12.296183    5866 start.go:360] acquireMachinesLock for embed-certs-347000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:13.642846    5866 start.go:364] duration metric: took 1.346540459s to acquireMachinesLock for "embed-certs-347000"
	I0917 02:46:13.643029    5866 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:13.643047    5866 fix.go:54] fixHost starting: 
	I0917 02:46:13.643717    5866 fix.go:112] recreateIfNeeded on embed-certs-347000: state=Stopped err=<nil>
	W0917 02:46:13.643743    5866 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:13.662549    5866 out.go:177] * Restarting existing qemu2 VM for "embed-certs-347000" ...
	I0917 02:46:13.668483    5866 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:13.668658    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:14:6b:34:1e:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/embed-certs-347000/disk.qcow2
	I0917 02:46:13.678108    5866 main.go:141] libmachine: STDOUT: 
	I0917 02:46:13.678185    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:13.678291    5866 fix.go:56] duration metric: took 35.241875ms for fixHost
	I0917 02:46:13.678311    5866 start.go:83] releasing machines lock for "embed-certs-347000", held for 35.424209ms
	W0917 02:46:13.678501    5866 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-347000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:13.692489    5866 out.go:201] 
	W0917 02:46:13.696548    5866 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:13.696595    5866 out.go:270] * 
	* 
	W0917 02:46:13.699471    5866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:13.708542    5866 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-347000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (57.34725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.61s)
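
The SecondStart path differs from FirstStart (fixHost/driver start on an existing machine rather than create), but it dies on the same refused socket. A hedged way to reproduce the driver error outside the test harness, reusing the client invocation visible in the log (the "socket_vmnet_client SOCKET COMMAND..." calling convention is inferred from that invocation):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# expected while the daemon is down:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused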

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml: exit status 1 (31.803917ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (31.3885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (31.969375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-347000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (34.335833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-347000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.643666ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-347000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-347000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (30.381084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-832000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system: exit status 1 (29.204125ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (34.887042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-347000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (31.341625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
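
The -want list in the diff above is the standard control-plane image set for v1.31.1 plus minikube's storage-provisioner; because the VM never booted, "image list" returned an empty set, so every entry is reported missing. A hedged cross-check on any machine with kubeadm installed:

	kubeadm config images list --kubernetes-version v1.31.1
	# prints registry.k8s.io/kube-apiserver:v1.31.1, kube-controller-manager,
	# kube-scheduler, kube-proxy, coredns, pause and etcd, matching the diff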

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1: exit status 83 (51.313542ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-347000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-347000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:13.987919    5899 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:13.988058    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:13.988061    5899 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:13.988064    5899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:13.988197    5899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:13.988438    5899 out.go:352] Setting JSON to false
	I0917 02:46:13.988445    5899 mustload.go:65] Loading cluster: embed-certs-347000
	I0917 02:46:13.988683    5899 config.go:182] Loaded profile config "embed-certs-347000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:13.993438    5899 out.go:177] * The control-plane node embed-certs-347000 host is not running: state=Stopped
	I0917 02:46:14.001388    5899 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-347000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-347000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (29.420833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (28.521542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-347000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (10.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.954861792s)

                                                
                                                
-- stdout --
	* [newest-cni-371000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-371000" primary control-plane node in "newest-cni-371000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-371000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:14.309791    5924 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:14.309920    5924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:14.309924    5924 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:14.309926    5924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:14.310052    5924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:14.311088    5924 out.go:352] Setting JSON to false
	I0917 02:46:14.327390    5924 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4544,"bootTime":1726561830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:46:14.327461    5924 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:46:14.332496    5924 out.go:177] * [newest-cni-371000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:46:14.339505    5924 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:46:14.339561    5924 notify.go:220] Checking for updates...
	I0917 02:46:14.345452    5924 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:46:14.348399    5924 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:46:14.351432    5924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:46:14.354432    5924 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:46:14.357474    5924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:46:14.360740    5924 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:14.360801    5924 config.go:182] Loaded profile config "multinode-661000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:14.360855    5924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:46:14.365417    5924 out.go:177] * Using the qemu2 driver based on user configuration
	I0917 02:46:14.372420    5924 start.go:297] selected driver: qemu2
	I0917 02:46:14.372426    5924 start.go:901] validating driver "qemu2" against <nil>
	I0917 02:46:14.372433    5924 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:46:14.374857    5924 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0917 02:46:14.374895    5924 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0917 02:46:14.382382    5924 out.go:177] * Automatically selected the socket_vmnet network
	I0917 02:46:14.385446    5924 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 02:46:14.385468    5924 cni.go:84] Creating CNI manager for ""
	I0917 02:46:14.385494    5924 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:46:14.385499    5924 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 02:46:14.385532    5924 start.go:340] cluster config:
	{Name:newest-cni-371000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:14.389229    5924 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:46:14.396300    5924 out.go:177] * Starting "newest-cni-371000" primary control-plane node in "newest-cni-371000" cluster
	I0917 02:46:14.400371    5924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:46:14.400387    5924 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:46:14.400394    5924 cache.go:56] Caching tarball of preloaded images
	I0917 02:46:14.400456    5924 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:46:14.400463    5924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:46:14.400523    5924 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/newest-cni-371000/config.json ...
	I0917 02:46:14.400534    5924 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/newest-cni-371000/config.json: {Name:mk22731af8c79ac17e8861380504f5023276cfe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 02:46:14.400773    5924 start.go:360] acquireMachinesLock for newest-cni-371000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:14.400810    5924 start.go:364] duration metric: took 31µs to acquireMachinesLock for "newest-cni-371000"
	I0917 02:46:14.400822    5924 start.go:93] Provisioning new machine with config: &{Name:newest-cni-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:46:14.400856    5924 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:46:14.405307    5924 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:46:14.423471    5924 start.go:159] libmachine.API.Create for "newest-cni-371000" (driver="qemu2")
	I0917 02:46:14.423508    5924 client.go:168] LocalClient.Create starting
	I0917 02:46:14.423577    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:46:14.423607    5924 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:14.423621    5924 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:14.423657    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:46:14.423685    5924 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:14.423692    5924 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:14.424046    5924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:46:14.586025    5924 main.go:141] libmachine: Creating SSH key...
	I0917 02:46:14.790847    5924 main.go:141] libmachine: Creating Disk image...
	I0917 02:46:14.790855    5924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:46:14.791063    5924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:14.800572    5924 main.go:141] libmachine: STDOUT: 
	I0917 02:46:14.800594    5924 main.go:141] libmachine: STDERR: 
	I0917 02:46:14.800651    5924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2 +20000M
	I0917 02:46:14.808657    5924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:46:14.808672    5924 main.go:141] libmachine: STDERR: 
	I0917 02:46:14.808685    5924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:14.808688    5924 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:46:14.808700    5924 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:14.808729    5924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:5f:a5:85:90:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:14.810275    5924 main.go:141] libmachine: STDOUT: 
	I0917 02:46:14.810289    5924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:14.810310    5924 client.go:171] duration metric: took 386.800084ms to LocalClient.Create
	I0917 02:46:16.812478    5924 start.go:128] duration metric: took 2.411612084s to createHost
	I0917 02:46:16.812522    5924 start.go:83] releasing machines lock for "newest-cni-371000", held for 2.411718s
	W0917 02:46:16.812587    5924 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:16.824914    5924 out.go:177] * Deleting "newest-cni-371000" in qemu2 ...
	W0917 02:46:16.858156    5924 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:16.858187    5924 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:21.860331    5924 start.go:360] acquireMachinesLock for newest-cni-371000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:21.860791    5924 start.go:364] duration metric: took 328.334µs to acquireMachinesLock for "newest-cni-371000"
	I0917 02:46:21.860930    5924 start.go:93] Provisioning new machine with config: &{Name:newest-cni-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 02:46:21.861185    5924 start.go:125] createHost starting for "" (driver="qemu2")
	I0917 02:46:21.870629    5924 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0917 02:46:21.921213    5924 start.go:159] libmachine.API.Create for "newest-cni-371000" (driver="qemu2")
	I0917 02:46:21.921272    5924 client.go:168] LocalClient.Create starting
	I0917 02:46:21.921381    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/ca.pem
	I0917 02:46:21.921451    5924 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:21.921467    5924 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:21.921529    5924 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1056/.minikube/certs/cert.pem
	I0917 02:46:21.921577    5924 main.go:141] libmachine: Decoding PEM data...
	I0917 02:46:21.921587    5924 main.go:141] libmachine: Parsing certificate...
	I0917 02:46:21.922105    5924 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso...
	I0917 02:46:22.096413    5924 main.go:141] libmachine: Creating SSH key...
	I0917 02:46:22.159794    5924 main.go:141] libmachine: Creating Disk image...
	I0917 02:46:22.159800    5924 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0917 02:46:22.159992    5924 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:22.169190    5924 main.go:141] libmachine: STDOUT: 
	I0917 02:46:22.169220    5924 main.go:141] libmachine: STDERR: 
	I0917 02:46:22.169274    5924 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2 +20000M
	I0917 02:46:22.177109    5924 main.go:141] libmachine: STDOUT: Image resized.
	
	I0917 02:46:22.177126    5924 main.go:141] libmachine: STDERR: 
	I0917 02:46:22.177139    5924 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:22.177143    5924 main.go:141] libmachine: Starting QEMU VM...
	I0917 02:46:22.177151    5924 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:22.177177    5924 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:58:56:51:52:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:22.178721    5924 main.go:141] libmachine: STDOUT: 
	I0917 02:46:22.178736    5924 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:22.178749    5924 client.go:171] duration metric: took 257.471041ms to LocalClient.Create
	I0917 02:46:24.180918    5924 start.go:128] duration metric: took 2.319716s to createHost
	I0917 02:46:24.180967    5924 start.go:83] releasing machines lock for "newest-cni-371000", held for 2.320169042s
	W0917 02:46:24.181295    5924 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-371000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:24.199959    5924 out.go:201] 
	W0917 02:46:24.205952    5924 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:24.206016    5924 out.go:270] * 
	* 
	W0917 02:46:24.208585    5924 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:24.217898    5924 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (64.715625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.02s)
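
Every start failure in this group shares the root cause visible in the stderr above: QEMU is launched through socket_vmnet_client, and the client cannot reach the daemon socket, failing with `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal sketch for checking the daemon on the build agent follows; it assumes a standard socket_vmnet install under /opt/socket_vmnet (matching the logged paths), and the Homebrew service name is an assumption, not something confirmed by this log:

	# Does the daemon socket exist on the agent?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon loaded in launchd?
	sudo launchctl list | grep -i socket_vmnet
	# If socket_vmnet was installed via Homebrew, (re)start the daemon:
	sudo brew services restart socket_vmnet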

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
E0917 02:46:18.052041    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.40534625s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-832000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-832000" primary control-plane node in "default-k8s-diff-port-832000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:17.901931    5952 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:17.902050    5952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:17.902053    5952 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:17.902056    5952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:17.902185    5952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:17.903204    5952 out.go:352] Setting JSON to false
	I0917 02:46:17.919182    5952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4547,"bootTime":1726561830,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:46:17.919245    5952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:46:17.924424    5952 out.go:177] * [default-k8s-diff-port-832000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:46:17.931276    5952 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:46:17.931330    5952 notify.go:220] Checking for updates...
	I0917 02:46:17.938366    5952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:46:17.939734    5952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:46:17.943370    5952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:46:17.946387    5952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:46:17.949423    5952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:46:17.952588    5952 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:17.952861    5952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:46:17.957421    5952 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:46:17.964354    5952 start.go:297] selected driver: qemu2
	I0917 02:46:17.964362    5952 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:17.964424    5952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:46:17.966767    5952 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 02:46:17.966791    5952 cni.go:84] Creating CNI manager for ""
	I0917 02:46:17.966815    5952 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:46:17.966833    5952 start.go:340] cluster config:
	{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-832000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:17.970237    5952 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:46:17.976390    5952 out.go:177] * Starting "default-k8s-diff-port-832000" primary control-plane node in "default-k8s-diff-port-832000" cluster
	I0917 02:46:17.980348    5952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:46:17.980362    5952 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:46:17.980372    5952 cache.go:56] Caching tarball of preloaded images
	I0917 02:46:17.980437    5952 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:46:17.980442    5952 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:46:17.980495    5952 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/default-k8s-diff-port-832000/config.json ...
	I0917 02:46:17.980947    5952 start.go:360] acquireMachinesLock for default-k8s-diff-port-832000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:17.980983    5952 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "default-k8s-diff-port-832000"
	I0917 02:46:17.980992    5952 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:17.980998    5952 fix.go:54] fixHost starting: 
	I0917 02:46:17.981119    5952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-832000: state=Stopped err=<nil>
	W0917 02:46:17.981127    5952 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:17.985388    5952 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	I0917 02:46:17.993349    5952 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:17.993380    5952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:32:4b:53:06:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:17.995421    5952 main.go:141] libmachine: STDOUT: 
	I0917 02:46:17.995439    5952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:17.995476    5952 fix.go:56] duration metric: took 14.479334ms for fixHost
	I0917 02:46:17.995481    5952 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 14.493125ms
	W0917 02:46:17.995486    5952 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:17.995516    5952 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:17.995521    5952 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:22.997824    5952 start.go:360] acquireMachinesLock for default-k8s-diff-port-832000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:24.181124    5952 start.go:364] duration metric: took 1.183137333s to acquireMachinesLock for "default-k8s-diff-port-832000"
	I0917 02:46:24.181305    5952 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:24.181321    5952 fix.go:54] fixHost starting: 
	I0917 02:46:24.182077    5952 fix.go:112] recreateIfNeeded on default-k8s-diff-port-832000: state=Stopped err=<nil>
	W0917 02:46:24.182105    5952 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:24.202852    5952 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	I0917 02:46:24.209880    5952 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:24.210092    5952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:32:4b:53:06:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0917 02:46:24.219308    5952 main.go:141] libmachine: STDOUT: 
	I0917 02:46:24.219363    5952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:24.219447    5952 fix.go:56] duration metric: took 38.126083ms for fixHost
	I0917 02:46:24.219463    5952 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 38.298416ms
	W0917 02:46:24.219623    5952 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:24.236002    5952 out.go:201] 
	W0917 02:46:24.240529    5952 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:24.240561    5952 out.go:270] * 
	* 
	W0917 02:46:24.242674    5952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:24.262933    5952 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (41.803ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.45s)
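
The second start takes the fix.go path (reusing the existing machine) but fails at the same point: qemu-system-aarch64 is wrapped by socket_vmnet_client, which exits non-zero when the daemon socket is unreachable, and libmachine reports that as "exit status 1". The failure can likely be reproduced outside minikube with the client alone; this sketch assumes socket_vmnet_client accepts a socket path followed by an arbitrary command to wrap, as in the executed command lines above:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# Expected while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused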

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-832000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (34.490833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-832000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.022791ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (31.158458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
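
Both post-stop checks above fail before ever reaching the cluster: because SecondStart exited during provisioning, no context named default-k8s-diff-port-832000 was written to the job's kubeconfig, so every kubectl call aborts with "context ... does not exist". A quick confirmation from the agent, using the KUBECONFIG path shown in the run settings:

	KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig \
	  kubectl config get-contexts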

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-832000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (28.55825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.06s)
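
The image check is a direct consequence of the failed start rather than a genuine image mismatch: with the VM stopped, the listing comes back empty, so the go-cmp diff reports every expected v1.31.1 image as missing. Re-running the same command from the test by hand would presumably show the same empty output:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-832000 image list --format=json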

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1: exit status 83 (40.975375ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-832000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:24.509543    5983 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:24.509704    5983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:24.509707    5983 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:24.509710    5983 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:24.509838    5983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:24.510052    5983 out.go:352] Setting JSON to false
	I0917 02:46:24.510058    5983 mustload.go:65] Loading cluster: default-k8s-diff-port-832000
	I0917 02:46:24.510276    5983 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:24.514935    5983 out.go:177] * The control-plane node default-k8s-diff-port-832000 host is not running: state=Stopped
	I0917 02:46:24.518915    5983 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (29.8885ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (29.457333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
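
Pause exits with code 83 here, which appears to be an early bail-out rather than a crash: the stderr shows mustload finding the profile's host Stopped and printing the "To start a cluster" hint instead of pausing. The status probes return exit code 7 for the same stopped state, which can be checked directly:

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-832000 --format='{{.Host}}'
	echo $?   # 7 while the host is Stopped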

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187826s)

                                                
                                                
-- stdout --
	* [newest-cni-371000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-371000" primary control-plane node in "newest-cni-371000" cluster
	* Restarting existing qemu2 VM for "newest-cni-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-371000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 02:46:27.464447    6018 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:27.464565    6018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:27.464568    6018 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:27.464571    6018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:27.464723    6018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:27.465742    6018 out.go:352] Setting JSON to false
	I0917 02:46:27.481811    6018 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4557,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 02:46:27.481876    6018 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 02:46:27.486820    6018 out.go:177] * [newest-cni-371000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 02:46:27.494986    6018 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 02:46:27.495072    6018 notify.go:220] Checking for updates...
	I0917 02:46:27.502932    6018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 02:46:27.505989    6018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 02:46:27.508892    6018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 02:46:27.511949    6018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 02:46:27.514932    6018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 02:46:27.518134    6018 config.go:182] Loaded profile config "newest-cni-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:27.518390    6018 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 02:46:27.522935    6018 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 02:46:27.529903    6018 start.go:297] selected driver: qemu2
	I0917 02:46:27.529908    6018 start.go:901] validating driver "qemu2" against &{Name:newest-cni-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:27.529959    6018 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 02:46:27.532334    6018 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0917 02:46:27.532365    6018 cni.go:84] Creating CNI manager for ""
	I0917 02:46:27.532394    6018 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 02:46:27.532423    6018 start.go:340] cluster config:
	{Name:newest-cni-371000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-371000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 02:46:27.536022    6018 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 02:46:27.543777    6018 out.go:177] * Starting "newest-cni-371000" primary control-plane node in "newest-cni-371000" cluster
	I0917 02:46:27.547930    6018 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 02:46:27.547950    6018 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 02:46:27.547959    6018 cache.go:56] Caching tarball of preloaded images
	I0917 02:46:27.548044    6018 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 02:46:27.548050    6018 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 02:46:27.548117    6018 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/newest-cni-371000/config.json ...
	I0917 02:46:27.548569    6018 start.go:360] acquireMachinesLock for newest-cni-371000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:27.548606    6018 start.go:364] duration metric: took 30.583µs to acquireMachinesLock for "newest-cni-371000"
	I0917 02:46:27.548614    6018 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:27.548622    6018 fix.go:54] fixHost starting: 
	I0917 02:46:27.548768    6018 fix.go:112] recreateIfNeeded on newest-cni-371000: state=Stopped err=<nil>
	W0917 02:46:27.548776    6018 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:27.551897    6018 out.go:177] * Restarting existing qemu2 VM for "newest-cni-371000" ...
	I0917 02:46:27.559952    6018 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:27.559997    6018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:58:56:51:52:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:27.562083    6018 main.go:141] libmachine: STDOUT: 
	I0917 02:46:27.562104    6018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:27.562136    6018 fix.go:56] duration metric: took 13.515875ms for fixHost
	I0917 02:46:27.562140    6018 start.go:83] releasing machines lock for "newest-cni-371000", held for 13.530125ms
	W0917 02:46:27.562149    6018 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:27.562191    6018 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:27.562196    6018 start.go:729] Will try again in 5 seconds ...
	I0917 02:46:32.564334    6018 start.go:360] acquireMachinesLock for newest-cni-371000: {Name:mk3e7d188bcefe956eb28fdd9b7680a9e805dac7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 02:46:32.564969    6018 start.go:364] duration metric: took 449.5µs to acquireMachinesLock for "newest-cni-371000"
	I0917 02:46:32.565160    6018 start.go:96] Skipping create...Using existing machine configuration
	I0917 02:46:32.565181    6018 fix.go:54] fixHost starting: 
	I0917 02:46:32.565971    6018 fix.go:112] recreateIfNeeded on newest-cni-371000: state=Stopped err=<nil>
	W0917 02:46:32.565998    6018 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 02:46:32.574574    6018 out.go:177] * Restarting existing qemu2 VM for "newest-cni-371000" ...
	I0917 02:46:32.577628    6018 qemu.go:418] Using hvf for hardware acceleration
	I0917 02:46:32.577931    6018 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:58:56:51:52:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1056/.minikube/machines/newest-cni-371000/disk.qcow2
	I0917 02:46:32.587436    6018 main.go:141] libmachine: STDOUT: 
	I0917 02:46:32.587498    6018 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0917 02:46:32.587569    6018 fix.go:56] duration metric: took 22.392125ms for fixHost
	I0917 02:46:32.587586    6018 start.go:83] releasing machines lock for "newest-cni-371000", held for 22.556541ms
	W0917 02:46:32.587749    6018 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-371000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0917 02:46:32.595590    6018 out.go:201] 
	W0917 02:46:32.599542    6018 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0917 02:46:32.599609    6018 out.go:270] * 
	W0917 02:46:32.602242    6018 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 02:46:32.610612    6018 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-371000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (68.082458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
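Note: the SecondStart failure above never reaches QEMU: on both attempts socket_vmnet_client cannot reach the daemon at /var/run/socket_vmnet, so fixHost gives up and the run exits with GUEST_PROVISION. A minimal Go sketch of that reachability check (illustrative only, not part of the test suite; the socket path is copied from the log above):

	// vmnetcheck.go - dials the socket_vmnet unix socket; an error here is the
	// same "Connection refused" condition that aborts the driver start above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}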

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-371000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (30.6725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
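Note: the "(-want +got)" diff above is the output format of github.com/google/go-cmp, and got is empty because the host never started, so "image list" had nothing to report. A minimal sketch of such a comparison (assuming go-cmp; the want list is copied from the failure):

	// Compares an expected image list against what the cluster reported,
	// printing a go-cmp diff in the same "(-want +got)" shape seen above.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/kube-controller-manager:v1.31.1",
			"registry.k8s.io/kube-proxy:v1.31.1",
			"registry.k8s.io/kube-scheduler:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the stopped host returned no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}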

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-371000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-371000 --alsologtostderr -v=1: exit status 83 (40.827792ms)

-- stdout --
	* The control-plane node newest-cni-371000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-371000"

-- /stdout --
** stderr ** 
	I0917 02:46:32.795918    6032 out.go:345] Setting OutFile to fd 1 ...
	I0917 02:46:32.796060    6032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:32.796063    6032 out.go:358] Setting ErrFile to fd 2...
	I0917 02:46:32.796066    6032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 02:46:32.796203    6032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 02:46:32.796420    6032 out.go:352] Setting JSON to false
	I0917 02:46:32.796427    6032 mustload.go:65] Loading cluster: newest-cni-371000
	I0917 02:46:32.796655    6032 config.go:182] Loaded profile config "newest-cni-371000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 02:46:32.800383    6032 out.go:177] * The control-plane node newest-cni-371000 host is not running: state=Stopped
	I0917 02:46:32.804132    6032 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-371000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-371000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (30.164333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-371000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (30.142875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-371000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (154/270)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 10.25
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 203.19
29 TestAddons/serial/Volcano 39.23
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.43
35 TestAddons/parallel/InspektorGadget 10.25
36 TestAddons/parallel/MetricsServer 5.28
39 TestAddons/parallel/CSI 53.32
40 TestAddons/parallel/Headlamp 16.62
41 TestAddons/parallel/CloudSpanner 5.16
42 TestAddons/parallel/LocalPath 40.98
43 TestAddons/parallel/NvidiaDevicePlugin 5.2
44 TestAddons/parallel/Yakd 10.28
45 TestAddons/StoppedEnableDisable 12.42
53 TestHyperKitDriverInstallOrUpdate 11.24
56 TestErrorSpam/setup 34.62
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.7
60 TestErrorSpam/unpause 0.63
61 TestErrorSpam/stop 55.27
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 78.65
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.26
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.6
73 TestFunctional/serial/CacheCmd/cache/add_local 1.35
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.6
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.83
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 35.89
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.66
84 TestFunctional/serial/LogsFileCmd 0.59
85 TestFunctional/serial/InvalidService 3.62
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 6.67
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.25
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 26.62
99 TestFunctional/parallel/SSHCmd 0.16
100 TestFunctional/parallel/CpCmd 0.43
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.41
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.11
111 TestFunctional/parallel/License 0.33
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.2
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.22
119 TestFunctional/parallel/ImageCommands/Setup 1.78
120 TestFunctional/parallel/DockerEnv/bash 0.33
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
137 TestFunctional/parallel/ServiceCmd/List 0.13
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.13
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.29
152 TestFunctional/parallel/MountCmd/specific-port 1.18
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 177
161 TestMultiControlPlane/serial/DeployApp 5.03
162 TestMultiControlPlane/serial/PingHostFromPods 0.74
163 TestMultiControlPlane/serial/AddWorkerNode 53.23
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.13
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.08
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 1.93
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.21
208 TestMainNoArgs 0.03
255 TestStoppedBinaryUpgrade/Setup 1.35
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
272 TestNoKubernetes/serial/ProfileList 31.32
273 TestNoKubernetes/serial/Stop 3.69
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
290 TestStartStop/group/old-k8s-version/serial/Stop 3.29
291 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
301 TestStartStop/group/no-preload/serial/Stop 2.09
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
314 TestStartStop/group/embed-certs/serial/Stop 2.87
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.76
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
332 TestStartStop/group/newest-cni/serial/Stop 2.94
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-459000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-459000: exit status 85 (99.808125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |          |
	|         | -p download-only-459000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:37:24
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:37:24.831456    1557 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:24.831617    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:24.831621    1557 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:24.831623    1557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:24.831735    1557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	W0917 01:37:24.831823    1557 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19648-1056/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19648-1056/.minikube/config/config.json: no such file or directory
	I0917 01:37:24.833192    1557 out.go:352] Setting JSON to true
	I0917 01:37:24.853545    1557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":414,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:37:24.853610    1557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:24.858208    1557 out.go:97] [download-only-459000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:37:24.858355    1557 notify.go:220] Checking for updates...
	W0917 01:37:24.858377    1557 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 01:37:24.861250    1557 out.go:169] MINIKUBE_LOCATION=19648
	I0917 01:37:24.864301    1557 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:37:24.868229    1557 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:37:24.871211    1557 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:24.874228    1557 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	W0917 01:37:24.879286    1557 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 01:37:24.879519    1557 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:24.883198    1557 out.go:97] Using the qemu2 driver based on user configuration
	I0917 01:37:24.883221    1557 start.go:297] selected driver: qemu2
	I0917 01:37:24.883236    1557 start.go:901] validating driver "qemu2" against <nil>
	I0917 01:37:24.883316    1557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:24.886225    1557 out.go:169] Automatically selected the socket_vmnet network
	I0917 01:37:24.891960    1557 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 01:37:24.892050    1557 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 01:37:24.892095    1557 cni.go:84] Creating CNI manager for ""
	I0917 01:37:24.892144    1557 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 01:37:24.892194    1557 start.go:340] cluster config:
	{Name:download-only-459000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-459000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:24.897269    1557 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:24.902190    1557 out.go:97] Downloading VM boot image ...
	I0917 01:37:24.902204    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/iso/arm64/minikube-v1.34.0-1726415472-19646-arm64.iso
	I0917 01:37:32.511796    1557 out.go:97] Starting "download-only-459000" primary control-plane node in "download-only-459000" cluster
	I0917 01:37:32.511824    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:32.603171    1557 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 01:37:32.603199    1557 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:32.603436    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:32.607681    1557 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 01:37:32.607689    1557 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:32.700044    1557 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0917 01:37:41.832399    1557 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:41.832581    1557 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:42.527076    1557 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 01:37:42.527278    1557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-459000/config.json ...
	I0917 01:37:42.527296    1557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-459000/config.json: {Name:mk627dcd15406011f4f6d1943d972dd426926a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:37:42.527514    1557 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 01:37:42.527759    1557 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0917 01:37:43.302316    1557 out.go:193] 
	W0917 01:37:43.308767    1557 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0 0x1047997a0] Decompressors:map[bz2:0x14000681010 gz:0x14000681018 tar:0x14000680fc0 tar.bz2:0x14000680fd0 tar.gz:0x14000680fe0 tar.xz:0x14000680ff0 tar.zst:0x14000681000 tbz2:0x14000680fd0 tgz:0x14000680fe0 txz:0x14000680ff0 tzst:0x14000681000 xz:0x14000681020 zip:0x14000681030 zst:0x14000681028] Getters:map[file:0x14001516360 http:0x140007660a0 https:0x140007660f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0917 01:37:43.308797    1557 out_reason.go:110] 
	W0917 01:37:43.316623    1557 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 01:37:43.320725    1557 out.go:193] 
	
	
	* The control-plane node download-only-459000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-459000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
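Note: the download failure above is a 404 on the checksum file for the v1.20.0 darwin/arm64 kubectl; upstream does not appear to publish darwin/arm64 binaries for a release that old, so both the binary and its .sha256 return 404. A small availability probe (illustrative only):

	// Probes dl.k8s.io for the kubectl .sha256 of two versions; the v1.20.0
	// URL is the one that 404s in the getter error above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		for _, v := range []string{"v1.20.0", "v1.31.1"} {
			url := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/darwin/arm64/kubectl.sha256", v)
			resp, err := http.Head(url)
			if err != nil {
				fmt.Println(v, "error:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(v, "->", resp.Status) // v1.20.0 is expected to report 404 here
		}
	}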

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-459000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (10.25s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-406000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-406000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (10.246865625s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.25s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-406000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-406000: exit status 85 (76.966208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-459000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| delete  | -p download-only-459000        | download-only-459000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT | 17 Sep 24 01:37 PDT |
	| start   | -o=json --download-only        | download-only-406000 | jenkins | v1.34.0 | 17 Sep 24 01:37 PDT |                     |
	|         | -p download-only-406000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 01:37:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:37:43.747207    1584 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:37:43.747348    1584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:43.747351    1584 out.go:358] Setting ErrFile to fd 2...
	I0917 01:37:43.747353    1584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:37:43.747489    1584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 01:37:43.748563    1584 out.go:352] Setting JSON to true
	I0917 01:37:43.767701    1584 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":433,"bootTime":1726561830,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:37:43.767776    1584 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:37:43.772667    1584 out.go:97] [download-only-406000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:37:43.772744    1584 notify.go:220] Checking for updates...
	I0917 01:37:43.776636    1584 out.go:169] MINIKUBE_LOCATION=19648
	I0917 01:37:43.779637    1584 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:37:43.783658    1584 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:37:43.786598    1584 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:37:43.789645    1584 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	W0917 01:37:43.795653    1584 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 01:37:43.795812    1584 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:37:43.799645    1584 out.go:97] Using the qemu2 driver based on user configuration
	I0917 01:37:43.799656    1584 start.go:297] selected driver: qemu2
	I0917 01:37:43.799660    1584 start.go:901] validating driver "qemu2" against <nil>
	I0917 01:37:43.799718    1584 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 01:37:43.803651    1584 out.go:169] Automatically selected the socket_vmnet network
	I0917 01:37:43.809084    1584 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0917 01:37:43.809175    1584 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 01:37:43.809193    1584 cni.go:84] Creating CNI manager for ""
	I0917 01:37:43.809219    1584 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 01:37:43.809231    1584 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 01:37:43.809279    1584 start.go:340] cluster config:
	{Name:download-only-406000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-406000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:37:43.813357    1584 iso.go:125] acquiring lock: {Name:mkc04c8f63d6315b912c6819d52840a9cdc59170 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:37:43.816613    1584 out.go:97] Starting "download-only-406000" primary control-plane node in "download-only-406000" cluster
	I0917 01:37:43.816620    1584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:43.873916    1584 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 01:37:43.873939    1584 cache.go:56] Caching tarball of preloaded images
	I0917 01:37:43.874127    1584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:43.878687    1584 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 01:37:43.878694    1584 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:43.954243    1584 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0917 01:37:51.990152    1584 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:51.990319    1584 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0917 01:37:52.511742    1584 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 01:37:52.511929    1584 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-406000/config.json ...
	I0917 01:37:52.511946    1584 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/download-only-406000/config.json: {Name:mk1dda156a1b344cbc521fa219259d9787f8c92a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:37:52.512494    1584 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 01:37:52.512645    1584 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1056/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-406000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-406000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
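Note: the preload flow above fetches the tarball with an md5 checksum carried in the URL query (?checksum=md5:...) and then verifies the saved file before trusting the cache ("saving checksum ... verifying checksum ..."). A minimal sketch of that verification step (illustrative; the md5 value is copied from the v1.31.1 preload URL above):

	// Computes the md5 of a downloaded preload tarball and compares it to the
	// checksum carried in the download URL. Path and sum are illustrative.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		err := verifyMD5("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4",
			"402f69b5e09ccb1e1dbe401b4cdd104d")
		fmt.Println(err)
	}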

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-406000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.39s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-969000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-969000
--- PASS: TestBinaryMirror (0.39s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-401000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-401000: exit status 85 (59.241458ms)

-- stdout --
	* Profile "addons-401000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-401000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-401000: exit status 85 (55.131917ms)
-- stdout --
	* Profile "addons-401000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
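
Both PreSetup checks assert the same guard from opposite directions: addon commands against a profile that has never been created must fail fast with exit status 85 rather than create anything. By hand (profile name illustrative):

	out/minikube-darwin-arm64 addons enable dashboard -p no-such-profile; echo $?    # prints the 'Profile ... not found' hint, then 85
	out/minikube-darwin-arm64 addons disable dashboard -p no-such-profile; echo $?   # same guard, same exit code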

TestAddons/Setup (203.19s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m23.184955292s)
--- PASS: TestAddons/Setup (203.19s)
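
This setup start is the single cluster bring-up that all of the following addon tests depend on. A trimmed sketch of the same pattern, with only a few addons for brevity (profile name illustrative):

	out/minikube-darwin-arm64 start -p addons-demo --wait=true --memory=4000 \
	  --driver=qemu2 --addons=registry --addons=metrics-server --addons=ingress
	kubectl --context addons-demo get pods -A   # confirm the addon pods before running dependent tests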

TestAddons/serial/Volcano (39.23s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 7.206458ms
addons_test.go:905: volcano-admission stabilized in 7.290958ms
addons_test.go:913: volcano-controller stabilized in 7.338708ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-br8lf" [6d14e0ca-787a-4fa5-bfa4-0a6c77ff53e6] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004771041s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-jm4s7" [b6c59fcf-e37f-4e35-9967-6752025f310f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003715334s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-7qsct" [cc3b1fc3-5337-4992-a886-50af578b43b3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004046416s
addons_test.go:932: (dbg) Run:  kubectl --context addons-401000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-401000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-401000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d80ddef8-c9f4-47aa-9b4b-1604f3b56ade] Pending
helpers_test.go:344: "test-job-nginx-0" [d80ddef8-c9f4-47aa-9b4b-1604f3b56ade] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d80ddef8-c9f4-47aa-9b4b-1604f3b56ade] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.006159792s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable volcano --alsologtostderr -v=1: (10.002321875s)
--- PASS: TestAddons/serial/Volcano (39.23s)
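
The testdata/vcjob.yaml applied above is not reproduced in the log; a minimal Volcano Job of the same shape would look roughly like the sketch below (field values are assumptions, but note how a task named "nginx" yields the pod name test-job-nginx-0 seen in the log):

	# assumes the my-volcano namespace already exists
	kubectl --context addons-401000 apply -f - <<'EOF'
	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  minAvailable: 1
	  schedulerName: volcano
	  tasks:
	  - replicas: 1
	    name: nginx
	    template:
	      spec:
	        restartPolicy: Never
	        containers:
	        - name: nginx
	          image: nginx
	EOF
	kubectl --context addons-401000 get vcjob -n my-volcano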

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-401000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-401000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)
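
What this check asserts: the gcp-auth addon replicates its credentials secret into namespaces created after setup, so image pulls keep working there. The same two steps by hand:

	kubectl --context addons-401000 create ns new-namespace
	kubectl --context addons-401000 get secret gcp-auth -n new-namespace   # presence means the addon propagated it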

TestAddons/parallel/Ingress (18.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-401000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a88f1840-721d-4d45-93d2-6703dcdd22ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a88f1840-721d-4d45-93d2-6703dcdd22ca] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.011174709s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable ingress --alsologtostderr -v=1: (7.248438542s)
--- PASS: TestAddons/parallel/Ingress (18.43s)
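
The two verification paths above can be replayed manually: an in-VM curl that reaches the ingress by Host header, and a DNS query answered by ingress-dns on the node IP (hostnames follow the test's own examples):

	out/minikube-darwin-arm64 -p addons-401000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-darwin-arm64 -p addons-401000 ip)"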

TestAddons/parallel/InspektorGadget (10.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xzdj9" [9d887d01-4d75-47ad-95ee-8d5fe4283628] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007084667s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-401000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-401000: (5.240695459s)
--- PASS: TestAddons/parallel/InspektorGadget (10.25s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.421375ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9sskp" [8c247fc7-c700-4404-b3a6-e2031d9cc335] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009951s
addons_test.go:417: (dbg) Run:  kubectl --context addons-401000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)
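
The pass condition here is simply that the metrics pipeline answers once the deployment is healthy; a "Metrics API not available" error would fail the test:

	kubectl --context addons-401000 top pods -n kube-system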

TestAddons/parallel/CSI (53.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.172625ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
	(this pvc-phase poll ran 18 times in total while waiting for "hpvc")
addons_test.go:580: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [89d512ca-d049-4b84-981b-1af63fcf8104] Pending
helpers_test.go:344: "task-pv-pod" [89d512ca-d049-4b84-981b-1af63fcf8104] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [89d512ca-d049-4b84-981b-1af63fcf8104] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.014738625s
addons_test.go:590: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
	(this pvc-phase poll ran 13 times in total while waiting for "hpvc-restore")
addons_test.go:622: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8fd87030-2efd-4ce0-9dbc-740f76cac403] Pending
helpers_test.go:344: "task-pv-pod-restore" [8fd87030-2efd-4ce0-9dbc-740f76cac403] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8fd87030-2efd-4ce0-9dbc-740f76cac403] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005177791s
addons_test.go:632: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-401000 delete pod task-pv-pod-restore: (1.090917791s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-401000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.125590709s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.32s)
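
This block walks the full CSI snapshot round-trip: PVC -> pod -> VolumeSnapshot -> restored PVC -> pod. The repo's testdata files are not shown in the log; a minimal sketch of the two snapshot-specific objects, assuming the addon's default class names (csi-hostpath-snapclass, csi-hostpath-sc):

	kubectl --context addons-401000 apply -f - <<'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	---
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF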

TestAddons/parallel/Headlamp (16.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-401000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-7k8zg" [4cecd049-12a9-4393-a8a3-72b08d93fd04] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-7k8zg" [4cecd049-12a9-4393-a8a3-72b08d93fd04] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009330292s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable headlamp --alsologtostderr -v=1: (5.279467708s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-tvrcb" [38e38a4e-932f-4997-b003-2026fff4e3a5] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004042708s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-401000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-401000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-401000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o jsonpath={.status.phase} -n default
	(this pvc-phase poll ran 6 times in total while waiting for "test-pvc")
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [83058deb-c586-47b5-a161-1d06456c39b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [83058deb-c586-47b5-a161-1d06456c39b4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [83058deb-c586-47b5-a161-1d06456c39b4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.006108958s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-401000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 ssh "cat /opt/local-path-provisioner/pvc-6ac7363b-2240-4d1e-b5b3-99cc58b807e2_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-401000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-401000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.472192166s)
--- PASS: TestAddons/parallel/LocalPath (40.98s)
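
The pvc.yaml/pod.yaml pair maps to the rancher local-path provisioner: a claim against its storage class, a pod that writes file1 into the volume, and the file then readable on the node under /opt/local-path-provisioner. A rough sketch (names illustrative; "local-path" is the provisioner's conventional class name, not confirmed by the log):

	kubectl --context addons-401000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: test-local-path
	spec:
	  restartPolicy: Never
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sh", "-c", "echo local-path > /data/file1"]
	    volumeMounts:
	    - name: data
	      mountPath: /data
	  volumes:
	  - name: data
	    persistentVolumeClaim:
	      claimName: test-pvc
	EOF
	# once the pod completes, the data is visible on the node:
	out/minikube-darwin-arm64 -p addons-401000 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"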

TestAddons/parallel/NvidiaDevicePlugin (5.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6qb27" [caa6a2fb-5902-4ec2-95de-42e47a1db59c] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01179125s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-401000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.20s)

TestAddons/parallel/Yakd (10.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-28k9k" [7ba81c0d-4dc6-42ef-9eeb-39189e1a9683] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010430625s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-401000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-401000 addons disable yakd --alsologtostderr -v=1: (5.268620791s)
--- PASS: TestAddons/parallel/Yakd (10.28s)

TestAddons/StoppedEnableDisable (12.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-401000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-401000: (12.231822084s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-401000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-401000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-401000
--- PASS: TestAddons/StoppedEnableDisable (12.42s)

TestHyperKitDriverInstallOrUpdate (11.24s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.24s)

TestErrorSpam/setup (34.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-437000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-437000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 --driver=qemu2 : (34.622293625s)
--- PASS: TestErrorSpam/setup (34.62s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.7s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 pause
--- PASS: TestErrorSpam/pause (0.70s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop: (3.198655208s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop: (26.037335083s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-437000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-437000 stop: (26.032910542s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19648-1056/.minikube/files/etc/test/nested/copy/1555/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
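
CopySyncFile relies on minikube's file-sync convention: anything placed under $MINIKUBE_HOME/files/<path> on the host is copied to /<path> inside the node at start. A sketch (paths and profile name illustrative):

	mkdir -p ~/.minikube/files/etc/test/nested/copy
	echo "synced" > ~/.minikube/files/etc/test/nested/copy/hosts
	out/minikube-darwin-arm64 start -p functional-demo --driver=qemu2
	out/minikube-darwin-arm64 -p functional-demo ssh "cat /etc/test/nested/copy/hosts"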

TestFunctional/serial/StartWithProxy (78.65s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-386000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m18.651412958s)
--- PASS: TestFunctional/serial/StartWithProxy (78.65s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-386000 --alsologtostderr -v=8: (38.254586s)
functional_test.go:663: soft start took 38.255090666s for "functional-386000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.26s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-386000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-386000 cache add registry.k8s.io/pause:3.1: (1.005108125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.60s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local434175255/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache add minikube-local-cache-test:functional-386000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-386000 cache add minikube-local-cache-test:functional-386000: (1.025402834s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache delete minikube-local-cache-test:functional-386000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-386000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)
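
add_local builds a throwaway local image and pushes it into the cluster's cache; the same flow by hand (tag name illustrative):

	docker build -t minikube-local-cache-test:demo .
	out/minikube-darwin-arm64 -p functional-386000 cache add minikube-local-cache-test:demo
	# clean up on both sides afterwards
	out/minikube-darwin-arm64 -p functional-386000 cache delete minikube-local-cache-test:demo
	docker rmi minikube-local-cache-test:demo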

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.465208ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.60s)
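
cache_reload checks that a cached image deleted inside the node can be restored from the host-side cache: delete it in the VM, confirm crictl no longer sees it, reload, confirm it is back:

	out/minikube-darwin-arm64 -p functional-386000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-darwin-arm64 -p functional-386000 cache reload
	out/minikube-darwin-arm64 -p functional-386000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again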

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.83s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 kubectl -- --context functional-386000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.83s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-386000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-386000 get pods: (1.010550333s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (35.89s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0917 01:56:18.091554    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.100005    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.113402    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.135567    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.178969    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.262330    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.425743    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:18.749376    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:19.393129    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:20.676247    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:23.239937    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:56:28.363618    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-386000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.885800458s)
functional_test.go:761: restart took 35.885885125s for "functional-386000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.89s)
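
--extra-config passes component flags through to the control plane on restart, in the form component.flag=value. The invocation above, reduced to its essentials, plus one way to confirm the plugin landed (the label selector is the standard kubeadm static-pod label):

	out/minikube-darwin-arm64 start -p functional-386000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-386000 -n kube-system get pod \
	  -l component=kube-apiserver -o yaml | grep enable-admission-plugins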

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-386000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3930244415/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)

TestFunctional/serial/InvalidService (3.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-386000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-386000: exit status 115 (141.754375ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30505 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-386000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.62s)
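
An "invalid service" here is a NodePort service whose selector matches no running pod: minikube can still print the URL table, but exits 115 with SVC_UNREACHABLE. A minimal stand-in for testdata/invalidsvc.yaml (illustrative, not the repo file):

	kubectl --context functional-386000 apply -f - <<'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod
	  ports:
	  - port: 80
	EOF
	out/minikube-darwin-arm64 service invalid-svc -p functional-386000; echo $?   # expect 115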

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 config get cpus: exit status 14 (29.343ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 config get cpus: exit status 14 (30.288708ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
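
The config subcommands are a simple key-value store; getting an unset key is the only failing case, with exit status 14:

	out/minikube-darwin-arm64 -p functional-386000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-386000 config get cpus             # prints 2
	out/minikube-darwin-arm64 -p functional-386000 config unset cpus
	out/minikube-darwin-arm64 -p functional-386000 config get cpus; echo $?    # 'Error: specified key could not be found in config', then 14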

TestFunctional/parallel/DashboardCmd (6.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-386000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-386000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2381: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.67s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.5975ms)
-- stdout --
	* [functional-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0917 01:57:24.269317    2364 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:57:24.269482    2364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.269485    2364 out.go:358] Setting ErrFile to fd 2...
	I0917 01:57:24.269488    2364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.269645    2364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 01:57:24.270650    2364 out.go:352] Setting JSON to false
	I0917 01:57:24.288127    2364 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1614,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:57:24.288196    2364 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:57:24.292223    2364 out.go:177] * [functional-386000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0917 01:57:24.301109    2364 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:57:24.301168    2364 notify.go:220] Checking for updates...
	I0917 01:57:24.308990    2364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:57:24.312086    2364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:57:24.314954    2364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:57:24.318031    2364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 01:57:24.321101    2364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:57:24.322622    2364 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:57:24.322886    2364 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:57:24.327083    2364 out.go:177] * Using the qemu2 driver based on existing profile
	I0917 01:57:24.333950    2364 start.go:297] selected driver: qemu2
	I0917 01:57:24.333958    2364 start.go:901] validating driver "qemu2" against &{Name:functional-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:57:24.334024    2364 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:57:24.340019    2364 out.go:201] 
	W0917 01:57:24.344123    2364 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 01:57:24.348017    2364 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
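
The non-zero exit above is the behavior under test: --dry-run still runs config validation before touching the VM. A minimal by-hand reproduction (binary path, profile name, exit code, and threshold all taken from the log):

  out/minikube-darwin-arm64 start -p functional-386000 --dry-run --memory 250MB --driver=qemu2
  # expected: exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), since 250MiB is below the usable minimum of 1800MB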

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-386000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.979042ms)

-- stdout --
	* [functional-386000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 01:57:24.496191    2375 out.go:345] Setting OutFile to fd 1 ...
	I0917 01:57:24.496309    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.496312    2375 out.go:358] Setting ErrFile to fd 2...
	I0917 01:57:24.496315    2375 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 01:57:24.496435    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
	I0917 01:57:24.497789    2375 out.go:352] Setting JSON to false
	I0917 01:57:24.514968    2375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1614,"bootTime":1726561830,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0917 01:57:24.515048    2375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 01:57:24.520098    2375 out.go:177] * [functional-386000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0917 01:57:24.529081    2375 notify.go:220] Checking for updates...
	I0917 01:57:24.533007    2375 out.go:177]   - MINIKUBE_LOCATION=19648
	I0917 01:57:24.536908    2375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	I0917 01:57:24.540042    2375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0917 01:57:24.543062    2375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:57:24.546096    2375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	I0917 01:57:24.549088    2375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:57:24.552336    2375 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 01:57:24.552628    2375 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 01:57:24.557038    2375 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0917 01:57:24.564018    2375 start.go:297] selected driver: qemu2
	I0917 01:57:24.564024    2375 start.go:901] validating driver "qemu2" against &{Name:functional-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19646/minikube-v1.34.0-1726415472-19646-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:57:24.564074    2375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:57:24.570063    2375 out.go:201] 
	W0917 01:57:24.574013    2375 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 01:57:24.578100    2375 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
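
For readers without French: the localized output above renders the same messages as the English DryRun log — "minikube v1.34.0 on Darwin 14.5 (arm64)", "Using the qemu2 driver based on existing profile", and an RSRC_INSUFFICIENT_REQ_MEMORY exit again reporting that the requested 250MiB is below the usable minimum of 1800MB. The test only swaps the locale; the validation path is identical.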

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
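
The -f flag here takes an arbitrary Go template rendered over the status struct, so the key names in the output format (including the "kublet" spelling above) are chosen by the caller, not by minikube; only {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are struct fields. For example:

  out/minikube-darwin-arm64 -p functional-386000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'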

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a074e1e6-e89e-4530-a7df-ca00cef1591d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007328584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-386000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-386000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e03e303e-d2a5-4e3e-a5c5-71d98fc328bd] Pending
helpers_test.go:344: "sp-pod" [e03e303e-d2a5-4e3e-a5c5-71d98fc328bd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e03e303e-d2a5-4e3e-a5c5-71d98fc328bd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010893167s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-386000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-386000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-386000 delete -f testdata/storage-provisioner/pod.yaml: (1.0890385s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b2c2475-0f35-43d0-b2f2-a64246e6d228] Pending
helpers_test.go:344: "sp-pod" [7b2c2475-0f35-43d0-b2f2-a64246e6d228] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7b2c2475-0f35-43d0-b2f2-a64246e6d228] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.014975541s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-386000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.62s)
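
The sequence above amounts to a persistence round-trip that can be replayed by hand (context and manifest paths taken from the log):

  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-386000 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-386000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-386000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-386000 exec sp-pod -- ls /tmp/mount   # foo survives the pod restart because it lives on the PVC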

TestFunctional/parallel/SSHCmd (0.16s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.16s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh -n functional-386000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cp functional-386000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd11119394/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh -n functional-386000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh -n functional-386000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1555/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /etc/test/nested/copy/1555/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1555.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/1555.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1555.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /usr/share/ca-certificates/1555.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/15552.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /usr/share/ca-certificates/15552.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
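
The hash-named files checked above follow OpenSSL's subject-hash naming convention, so each pair can be cross-checked; a sketch, assuming local copies of the two test certificates:

  openssl x509 -noout -subject_hash -in 1555.pem    # expected to print 51391683
  openssl x509 -noout -subject_hash -in 15552.pem   # expected to print 3ec20f2e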

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-386000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
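
The same go-template one-liner works against any kubectl context for a quick look at the label keys on the first node:

  kubectl --context functional-386000 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'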

TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh "sudo systemctl is-active crio": exit status 1 (112.136417ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)
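
The non-zero exit is the point of this check: systemctl is-active exits with status 3 for an inactive unit, confirming cri-o is off while Docker is the active runtime:

  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo systemctl is-active crio"
  # prints "inactive"; systemctl exits 3, which minikube ssh surfaces as a non-zero exit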

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.20s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-386000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-386000
docker.io/kicbase/echo-server:functional-386000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-386000 image ls --format short --alsologtostderr:
I0917 01:57:27.897246    2403 out.go:345] Setting OutFile to fd 1 ...
I0917 01:57:27.897411    2403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:27.897415    2403 out.go:358] Setting ErrFile to fd 2...
I0917 01:57:27.897418    2403 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:27.897555    2403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 01:57:27.897997    2403 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:27.898054    2403 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:27.898902    2403 ssh_runner.go:195] Run: systemctl --version
I0917 01:57:27.898909    2403 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/functional-386000/id_rsa Username:docker}
I0917 01:57:27.926919    2403 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-386000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-386000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| localhost/my-image                          | functional-386000 | c78e2cbc1068d | 1.41MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/minikube-local-cache-test | functional-386000 | 5880fee2c3bea | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-386000 image ls --format table --alsologtostderr:
I0917 01:57:30.349240    2418 out.go:345] Setting OutFile to fd 1 ...
I0917 01:57:30.349395    2418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:30.349402    2418 out.go:358] Setting ErrFile to fd 2...
I0917 01:57:30.349404    2418 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:30.349542    2418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 01:57:30.349993    2418 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:30.350058    2418 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:30.350911    2418 ssh_runner.go:195] Run: systemctl --version
I0917 01:57:30.350918    2418 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/functional-386000/id_rsa Username:docker}
I0917 01:57:30.380313    2418 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/17 01:57:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-386000 image ls --format json --alsologtostderr:
[{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-386000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"5880fee2c3bea40aad5a1e4eb1f11f049953c8128c7dc14d296033bd6aea2d02","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-386000"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c78e2cbc1068dea334d311a92b5578f33dbaf74951d5e26801c436bc96878b1f","repoDigests":[],"repoTags":["localhost/my-image:functional-386000"],"size":"1410000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-386000 image ls --format json --alsologtostderr:
I0917 01:57:30.273823    2416 out.go:345] Setting OutFile to fd 1 ...
I0917 01:57:30.273986    2416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:30.273990    2416 out.go:358] Setting ErrFile to fd 2...
I0917 01:57:30.273992    2416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:30.274150    2416 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 01:57:30.274597    2416 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:30.274658    2416 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:30.275514    2416 ssh_runner.go:195] Run: systemctl --version
I0917 01:57:30.275525    2416 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/functional-386000/id_rsa Username:docker}
I0917 01:57:30.303257    2416 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-386000 image ls --format yaml --alsologtostderr:
- id: 5880fee2c3bea40aad5a1e4eb1f11f049953c8128c7dc14d296033bd6aea2d02
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-386000
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-386000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-386000 image ls --format yaml --alsologtostderr:
I0917 01:57:27.974028    2405 out.go:345] Setting OutFile to fd 1 ...
I0917 01:57:27.974205    2405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:27.974210    2405 out.go:358] Setting ErrFile to fd 2...
I0917 01:57:27.974212    2405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:27.974338    2405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 01:57:27.974768    2405 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:27.974836    2405 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:27.975802    2405 ssh_runner.go:195] Run: systemctl --version
I0917 01:57:27.975810    2405 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/functional-386000/id_rsa Username:docker}
I0917 01:57:28.004398    2405 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh pgrep buildkitd: exit status 1 (62.863291ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr: (2.083732958s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-386000 image build -t localhost/my-image:functional-386000 testdata/build --alsologtostderr:
I0917 01:57:28.116648    2409 out.go:345] Setting OutFile to fd 1 ...
I0917 01:57:28.116885    2409 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:28.116889    2409 out.go:358] Setting ErrFile to fd 2...
I0917 01:57:28.116891    2409 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 01:57:28.117022    2409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1056/.minikube/bin
I0917 01:57:28.117468    2409 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:28.118271    2409 config.go:182] Loaded profile config "functional-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 01:57:28.119164    2409 ssh_runner.go:195] Run: systemctl --version
I0917 01:57:28.119176    2409 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1056/.minikube/machines/functional-386000/id_rsa Username:docker}
I0917 01:57:28.147352    2409 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1146096819.tar
I0917 01:57:28.147429    2409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 01:57:28.152913    2409 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1146096819.tar
I0917 01:57:28.154489    2409 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1146096819.tar: stat -c "%s %y" /var/lib/minikube/build/build.1146096819.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1146096819.tar': No such file or directory
I0917 01:57:28.154503    2409 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1146096819.tar --> /var/lib/minikube/build/build.1146096819.tar (3072 bytes)
I0917 01:57:28.163681    2409 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1146096819
I0917 01:57:28.167263    2409 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1146096819 -xf /var/lib/minikube/build/build.1146096819.tar
I0917 01:57:28.172705    2409 docker.go:360] Building image: /var/lib/minikube/build/build.1146096819
I0917 01:57:28.172794    2409 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-386000 /var/lib/minikube/build/build.1146096819
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c78e2cbc1068dea334d311a92b5578f33dbaf74951d5e26801c436bc96878b1f done
#8 naming to localhost/my-image:functional-386000 done
#8 DONE 0.1s
I0917 01:57:30.148433    2409 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-386000 /var/lib/minikube/build/build.1146096819: (1.975623375s)
I0917 01:57:30.148511    2409 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1146096819
I0917 01:57:30.152550    2409 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1146096819.tar
I0917 01:57:30.155935    2409 build_images.go:217] Built localhost/my-image:functional-386000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1146096819.tar
I0917 01:57:30.155952    2409 build_images.go:133] succeeded building to: functional-386000
I0917 01:57:30.155955    2409 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.22s)
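
Steps #1 and #5-#7 of the BuildKit log pin down the small Dockerfile under testdata/build; a plausible reconstruction (inferred from the log, not copied from the repo):

  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /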

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.760282125s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-386000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/DockerEnv/bash (0.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-386000 docker-env) && out/minikube-darwin-arm64 status -p functional-386000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-386000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.33s)
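
The pattern being verified: docker-env emits shell exports, so eval-ing it points the host docker CLI at the daemon inside the minikube VM for the current shell:

  eval $(out/minikube-darwin-arm64 -p functional-386000 docker-env)
  docker images   # now lists the images inside the functional-386000 VM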

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-386000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-386000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-knlnd" [0d12a9a9-f9a6-4216-9126-7424705637e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-knlnd" [0d12a9a9-f9a6-4216-9126-7424705637e1] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.010275292s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image load --daemon kicbase/echo-server:functional-386000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image load --daemon kicbase/echo-server:functional-386000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0917 01:56:38.607122    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-386000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image load --daemon kicbase/echo-server:functional-386000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image save kicbase/echo-server:functional-386000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image rm kicbase/echo-server:functional-386000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-386000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 image save --daemon kicbase/echo-server:functional-386000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-386000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)
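Taken together, the ImageCommands subtests above form a save/load round trip through the cluster's container runtime. The same cycle with the bare CLI, assuming the functional-386000 profile and a scratch tarball path in place of the workspace path from the log:

    # Cluster image -> tarball on the host
    minikube -p functional-386000 image save kicbase/echo-server:functional-386000 /tmp/echo-server.tar
    # Drop it from the cluster, restore it from the tarball, and confirm
    minikube -p functional-386000 image rm kicbase/echo-server:functional-386000
    minikube -p functional-386000 image load /tmp/echo-server.tar
    minikube -p functional-386000 image ls
    # The --daemon variants (ImageLoadDaemon/ImageSaveDaemon) move images to and
    # from the host's Docker daemon instead of a file
    minikube -p functional-386000 image save --daemon kicbase/echo-server:functional-386000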
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2231: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-386000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b9177655-fc0b-4fcd-bcb4-aa8e5ddaceec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b9177655-fc0b-4fcd-bcb4-aa8e5ddaceec] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009220416s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service list -o json
functional_test.go:1494: Took "86.406417ms" to run "out/minikube-darwin-arm64 -p functional-386000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31179
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31179
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-386000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.254.25 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-386000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
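The TunnelCmd serial chain reduces to: start a tunnel, wait for the LoadBalancer service to be assigned an ingress IP, hit it over HTTP and DNS, then tear the tunnel down. A condensed sketch, assuming nginx-svc from testdata/testsvc.yaml is deployed (on macOS the tunnel manipulates routes, so it may prompt for sudo):

    minikube -p functional-386000 tunnel &   # background it, as the test daemon does
    TUNNEL_PID=$!
    IP=$(kubectl --context functional-386000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working!"
    # DNS-based variants of the same check, per DNSResolutionByDig/Dscacheutil
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
    kill "$TUNNEL_PID"                       # DeleteTunnel equivalent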
TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "92.496625ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.696291ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "88.157125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.477291ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
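The three ProfileCmd subtests vary only the output format of one command; the JSON forms are the ones worth scripting against. A sketch, assuming jq is on PATH and that the profile list JSON keeps its usual valid/invalid top-level arrays (neither assumption is confirmed by this report):

    minikube profile list                     # human-readable table
    minikube profile list -o json --light | jq -r '.valid[].Name'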
TestFunctional/parallel/MountCmd/any-port (5.29s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726563436216036000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726563436216036000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726563436216036000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001/test-1726563436216036000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.615834ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 08:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 08:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 08:57 test-1726563436216036000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh cat /mount-9p/test-1726563436216036000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-386000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fae33927-0489-4190-ad87-5d93c7f159a3] Pending
helpers_test.go:344: "busybox-mount" [fae33927-0489-4190-ad87-5d93c7f159a3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fae33927-0489-4190-ad87-5d93c7f159a3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fae33927-0489-4190-ad87-5d93c7f159a3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.010761709s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-386000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port113827220/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.29s)
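MountCmd/any-port is a 9p round trip: mount a host directory into the guest, verify it from inside the VM with findmnt, exercise it from a pod, then unmount. The skeleton by hand, assuming the same profile and an illustrative host path:

    minikube -p functional-386000 mount /tmp/hostdir:/mount-9p &   # runs in the foreground, so background it
    minikube -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-386000 ssh -- ls -la /mount-9p
    minikube -p functional-386000 ssh "sudo umount -f /mount-9p"
    # VerifyCleanup's blunt instrument: kill every mount daemon for the profile
    minikube mount -p functional-386000 --kill=true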
TestFunctional/parallel/MountCmd/specific-port (1.18s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3158075812/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.84975ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3158075812/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-386000 ssh "sudo umount -f /mount-9p": exit status 1 (65.321417ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-386000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3158075812/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.18s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T" /mount1: (1.372756667s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-386000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-386000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-386000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2244190414/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-386000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-386000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-386000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (177s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-753000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0917 01:57:40.054067    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 01:59:01.977573    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-753000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m56.818101917s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (177.00s)
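StartCluster is the long pole here: just under three minutes to bring up a multi-control-plane cluster. Stripped of the harness's logging flags, the invocation from the log is (the qemu2 driver and 2200 MB memory match this job's configuration):

    # --ha requests extra control-plane nodes; --wait=true blocks until core components are healthy
    minikube start -p ha-753000 --ha --wait=true --memory=2200 --driver=qemu2
    minikube -p ha-753000 status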
TestMultiControlPlane/serial/DeployApp (5.03s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-753000 -- rollout status deployment/busybox: (3.4163805s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-7mh8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-m2lkq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-xllmp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-7mh8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-m2lkq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-xllmp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-7mh8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-m2lkq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-xllmp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.03s)

TestMultiControlPlane/serial/PingHostFromPods (0.74s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-7mh8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-7mh8r -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-m2lkq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-m2lkq -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-xllmp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-753000 -- exec busybox-7dff88458-xllmp -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.74s)
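PingHostFromPods resolves host.minikube.internal inside each busybox pod and pings the address that comes back; the awk 'NR==5' / cut pipeline simply plucks the IP out of busybox nslookup's fixed output layout. One pod's worth, using kubectl directly instead of the minikube kubectl wrapper the harness invokes (the pod name is copied from the log and will differ per run):

    HOST_IP=$(kubectl --context ha-753000 exec busybox-7dff88458-7mh8r -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-753000 exec busybox-7dff88458-7mh8r -- ping -c 1 "$HOST_IP"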
TestMultiControlPlane/serial/AddWorkerNode (53.23s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-753000 -v=7 --alsologtostderr
E0917 02:01:18.090931    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-753000 -v=7 --alsologtostderr: (53.027947s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.23s)

TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-753000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.13s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp testdata/cp-test.txt ha-753000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2674080439/001/cp-test_ha-753000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000:/home/docker/cp-test.txt ha-753000-m02:/home/docker/cp-test_ha-753000_ha-753000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test_ha-753000_ha-753000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000:/home/docker/cp-test.txt ha-753000-m03:/home/docker/cp-test_ha-753000_ha-753000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test_ha-753000_ha-753000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000:/home/docker/cp-test.txt ha-753000-m04:/home/docker/cp-test_ha-753000_ha-753000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test_ha-753000_ha-753000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp testdata/cp-test.txt ha-753000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2674080439/001/cp-test_ha-753000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m02:/home/docker/cp-test.txt ha-753000:/home/docker/cp-test_ha-753000-m02_ha-753000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test_ha-753000-m02_ha-753000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m02:/home/docker/cp-test.txt ha-753000-m03:/home/docker/cp-test_ha-753000-m02_ha-753000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test_ha-753000-m02_ha-753000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m02:/home/docker/cp-test.txt ha-753000-m04:/home/docker/cp-test_ha-753000-m02_ha-753000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test_ha-753000-m02_ha-753000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp testdata/cp-test.txt ha-753000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2674080439/001/cp-test_ha-753000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m03:/home/docker/cp-test.txt ha-753000:/home/docker/cp-test_ha-753000-m03_ha-753000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test_ha-753000-m03_ha-753000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m03:/home/docker/cp-test.txt ha-753000-m02:/home/docker/cp-test_ha-753000-m03_ha-753000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test_ha-753000-m03_ha-753000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m03:/home/docker/cp-test.txt ha-753000-m04:/home/docker/cp-test_ha-753000-m03_ha-753000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test_ha-753000-m03_ha-753000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp testdata/cp-test.txt ha-753000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2674080439/001/cp-test_ha-753000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m04:/home/docker/cp-test.txt ha-753000:/home/docker/cp-test_ha-753000-m04_ha-753000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000 "sudo cat /home/docker/cp-test_ha-753000-m04_ha-753000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m04:/home/docker/cp-test.txt ha-753000-m02:/home/docker/cp-test_ha-753000-m04_ha-753000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test_ha-753000-m04_ha-753000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 cp ha-753000-m04:/home/docker/cp-test.txt ha-753000-m03:/home/docker/cp-test_ha-753000-m04_ha-753000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-753000 ssh -n ha-753000-m03 "sudo cat /home/docker/cp-test_ha-753000-m04_ha-753000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.13s)
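CopyFile walks every ordered pair of nodes: copy a file in, copy it back out, copy it node to node, and cat it over ssh after each hop to compare. One hop of that matrix, assuming ha-753000 is still up (the destination file name is illustrative):

    minikube -p ha-753000 cp testdata/cp-test.txt ha-753000-m02:/home/docker/cp-test.txt
    minikube -p ha-753000 ssh -n ha-753000-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies go through the same cp subcommand
    minikube -p ha-753000 cp ha-753000-m02:/home/docker/cp-test.txt \
      ha-753000-m03:/home/docker/cp-test-from-m02.txt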
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0917 02:16:18.069084    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/addons-401000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:16:36.423120    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
E0917 02:17:59.512955    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.082807s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.08s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.93s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-570000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-570000 --output=json --user=testUser: (1.932055958s)
--- PASS: TestJSONOutput/stop/Command (1.93s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-148000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-148000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.801084ms)

-- stdout --
	{"specversion":"1.0","id":"05cbfa6e-5fb0-418f-853d-4707763d2ad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-148000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"09d2a7aa-03dc-4452-b11f-785bccc26894","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"ddd7517a-0a2f-4d35-b00e-874cd11ff483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig"}}
	{"specversion":"1.0","id":"e56c8f59-3bf4-46ec-935c-725a2ea2f5e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"8e54a5da-4273-470e-974e-a5ef26e0388c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1a8b2287-cbbd-42c5-8279-9b609b9b5102","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube"}}
	{"specversion":"1.0","id":"8dcd0303-0f51-4d57-bd46-4a41eb18cf97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a06dead-6a05-4eb3-b341-e4ea8e6ab42e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-148000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-148000
--- PASS: TestErrorJSONOutput (0.21s)
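Each line of the stdout above is one CloudEvents-style envelope; the type field separates steps, info, and errors, which makes failures easy to pick out of a JSON-mode run. A sketch, assuming jq is available (the profile name is the throwaway one from the test):

    out=$(minikube start -p json-output-error-148000 --output=json --driver=fail 2>/dev/null)
    status=$?   # 56 = DRV_UNSUPPORTED_OS, matching the error event in the log
    echo "$out" | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    echo "minikube exit code: $status"
    minikube delete -p json-output-error-148000   # clean up, as the test helper does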
TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.35s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.35s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-376000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.776083ms)

-- stdout --
	* [NoKubernetes-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
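The exit-14 failure above is the behaviour under test: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same guard. The recovery path is the one minikube itself prints:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-376000 --no-kubernetes --driver=qemu2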
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-376000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-376000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.750667ms)

-- stdout --
	* The control-plane node NoKubernetes-376000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-376000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.32s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.645409375s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.669850416s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.32s)
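Note: both listings pass but take roughly 15s each, which accounts for nearly all of this subtest's 31s. The --output=json form carries the same data in machine-readable shape; assuming minikube's usual top-level "valid" and "invalid" profile arrays (an assumption, the schema is not shown in this log), the names could be extracted with jq:

    # Hypothetical consumer of the JSON listing; requires jq.
    minikube profile list --output=json | jq -r '.valid[].Name, .invalid[].Name'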

TestNoKubernetes/serial/Stop (3.69s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-376000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-376000: (3.694322292s)
--- PASS: TestNoKubernetes/serial/Stop (3.69s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-376000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-376000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.973ms)
-- stdout --
	* The control-plane node NoKubernetes-376000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-376000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-288000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-336000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-336000 --alsologtostderr -v=3: (3.2939575s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-336000 -n old-k8s-version-336000: exit status 7 (45.710458ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-336000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
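Note: minikube status uses distinct non-zero exit codes to encode component state rather than command failure; exit status 7 with output "Stopped" is what a fully stopped profile returns, which is why the harness logs it as "(may be ok)" and proceeds to enable the addon, since addon settings are recorded in the profile's config even while the host is down. The same idiom in shell (profile name hypothetical; the code-7 check mirrors this log, not documented API):

    minikube status --format='{{.Host}}' -p demo -n demo
    if [ $? -eq 7 ]; then
      # Fully stopped profile: still safe to toggle addons in its config.
      minikube addons enable dashboard -p demo
    fi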

TestStartStop/group/no-preload/serial/Stop (2.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-105000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-105000 --alsologtostderr -v=3: (2.085530458s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-105000 -n no-preload-105000: exit status 7 (59.122125ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-105000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.87s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-347000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-347000 --alsologtostderr -v=3: (2.872621209s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.87s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-347000 -n embed-certs-347000: exit status 7 (57.0475ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-347000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-832000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-832000 --alsologtostderr -v=3: (3.760962708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (60.322291ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-832000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-371000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
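Note: the --images and --registries flags let the suite substitute where an addon's images come from; here MetricsServer is pointed at the echoserver image and a deliberately unreachable registry (fake.domain), so the enable/configure path is exercised without depending on a real metrics-server pull. The general shape of such an override (values below are the test's own substitutions, not production settings):

    minikube addons enable metrics-server -p demo \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain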

TestStartStop/group/newest-cni/serial/Stop (2.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-371000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-371000 --alsologtostderr -v=3: (2.944264708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.94s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-371000 -n newest-cni-371000: exit status 7 (62.500708ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-371000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/270)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0917 02:31:36.418565    1555 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1056/.minikube/profiles/functional-386000/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-688000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-688000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-688000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/hosts:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/resolv.conf:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-688000

>>> host: crictl pods:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: crictl containers:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> k8s: describe netcat deployment:
error: context "cilium-688000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-688000" does not exist

>>> k8s: netcat logs:
error: context "cilium-688000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-688000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-688000" does not exist

>>> k8s: coredns logs:
error: context "cilium-688000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-688000" does not exist

>>> k8s: api server logs:
error: context "cilium-688000" does not exist

>>> host: /etc/cni:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: ip a s:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: ip r s:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: iptables-save:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: iptables table nat:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-688000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-688000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-688000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-688000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-688000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-688000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-688000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-688000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-688000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-688000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-688000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: kubelet daemon config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> k8s: kubelet logs:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-688000

>>> host: docker daemon status:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: docker daemon config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: docker system info:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: cri-docker daemon status:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: cri-docker daemon config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: cri-dockerd version:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: containerd daemon status:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: containerd daemon config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: containerd config dump:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: crio daemon status:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: crio daemon config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: /etc/crio:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

>>> host: crio config:
* Profile "cilium-688000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-688000"

----------------------- debugLogs end: cilium-688000 [took: 2.341911416s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-688000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-688000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)
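Note: every "context was not found" / "does not exist" response in the debug dump above traces back to a single cause: the cilium test was skipped before a cluster was ever created, so no kubeconfig context named cilium-688000 exists. The captured kubectl config (clusters/contexts/users all null) shows the same emptiness; outside the harness, the quickest confirmation would be:

    kubectl config get-contexts
    kubectl config current-context   # errors when no current context is set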

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-802000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)