Test Report: QEMU_macOS 19696

60137f5eb61dd17472aeb1c9d9b63bd7ae7f04e6:2024-09-23:36347

Failed tests (99/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.27
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.29
22 TestOffline 10.07
33 TestAddons/parallel/Registry 71.3
45 TestCertOptions 10.15
46 TestCertExpiration 195.44
47 TestDockerFlags 10.11
48 TestForceSystemdFlag 10.17
49 TestForceSystemdEnv 10.78
94 TestFunctional/parallel/ServiceCmdConnect 32.02
166 TestMultiControlPlane/serial/StopSecondaryNode 64.13
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.94
168 TestMultiControlPlane/serial/RestartSecondaryNode 82.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.37
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.07
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.12
183 TestJSONOutput/start/Command 10.15
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.17
215 TestMountStart/serial/StartWithMountFirst 10.22
218 TestMultiNode/serial/FreshStart2Nodes 10.01
219 TestMultiNode/serial/DeployApp2Nodes 99.15
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 37.57
227 TestMultiNode/serial/RestartKeepsNodes 8.87
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 3.18
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.28
235 TestPreload 10.14
237 TestScheduledStopUnix 9.99
238 TestSkaffold 12.45
241 TestRunningBinaryUpgrade 603.46
243 TestKubernetesUpgrade 18.12
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.46
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.2
259 TestStoppedBinaryUpgrade/Upgrade 573.41
261 TestPause/serial/Start 9.98
271 TestNoKubernetes/serial/StartWithK8s 9.86
272 TestNoKubernetes/serial/StartWithStopK8s 5.31
273 TestNoKubernetes/serial/Start 5.3
277 TestNoKubernetes/serial/StartNoArgs 5.33
279 TestNetworkPlugins/group/auto/Start 9.99
280 TestNetworkPlugins/group/kindnet/Start 10.03
281 TestNetworkPlugins/group/calico/Start 9.95
282 TestNetworkPlugins/group/custom-flannel/Start 10
283 TestNetworkPlugins/group/false/Start 9.76
284 TestNetworkPlugins/group/enable-default-cni/Start 9.79
285 TestNetworkPlugins/group/flannel/Start 9.86
286 TestNetworkPlugins/group/bridge/Start 9.82
287 TestNetworkPlugins/group/kubenet/Start 9.76
290 TestStartStop/group/old-k8s-version/serial/FirstStart 10.01
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.1
301 TestStartStop/group/no-preload/serial/FirstStart 10.02
303 TestStartStop/group/embed-certs/serial/FirstStart 10.02
304 TestStartStop/group/no-preload/serial/DeployApp 0.09
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/no-preload/serial/SecondStart 6.7
309 TestStartStop/group/embed-certs/serial/DeployApp 0.09
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/embed-certs/serial/SecondStart 5.27
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/no-preload/serial/Pause 0.1
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.94
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/embed-certs/serial/Pause 0.1
325 TestStartStop/group/newest-cni/serial/FirstStart 9.92
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
335 TestStartStop/group/newest-cni/serial/SecondStart 5.25
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (17.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-711000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-711000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (17.273623292s)

-- stdout --
	{"specversion":"1.0","id":"6b283ef2-823d-4b4e-bb43-38d51f5e009b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-711000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7cc3357-1646-40ce-9b2b-c821d7516374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"08acdc8f-aeab-4b3f-83d9-b82c2631d91f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig"}}
	{"specversion":"1.0","id":"04fc5143-be95-41fa-b351-988af4ecb11a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"46361b2d-2225-4658-8b69-4a85a22037c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a28b82b9-b1a6-464d-a964-deeaf9f3ed80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube"}}
	{"specversion":"1.0","id":"69449cf9-cc64-44fe-9c1a-80f7206b701f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"9a566738-99ac-4eae-a09b-843b2d1dc7fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbd67303-3f9e-4f47-b1b9-9fa525fca1ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"45fe22d2-5f65-43e0-8a36-13b8236356e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6c668e6-f5df-41ab-8f64-1fd1d9b18555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-711000\" primary control-plane node in \"download-only-711000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a4fd94c-4f93-4ebf-a844-d314a950de6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8063b38e-e7d6-4117-8080-fda135edde31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0] Decompressors:map[bz2:0x14000120da0 gz:0x14000120da8 tar:0x14000120ce0 tar.bz2:0x14000120d10 tar.gz:0x14000120d40 tar.xz:0x14000120d50 tar.zst:0x14000120d60 tbz2:0x14000120d10 tgz:0x14000120d40 txz:0x14000120d50 tzst:0x14000120d60 xz:0x14000120dc0 zip:0x14000120dd0 zst:0x14000120dc8] Getters:map[file:0x14000714840 http:0x140006d2410 https:0x140006d2690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"13da3d51-2561-420b-b345-6d928c66f16e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0923 16:36:46.170083    1600 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:36:46.170239    1600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:36:46.170242    1600 out.go:358] Setting ErrFile to fd 2...
	I0923 16:36:46.170244    1600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:36:46.170360    1600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	W0923 16:36:46.170447    1600 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19696-1109/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19696-1109/.minikube/config/config.json: no such file or directory
	I0923 16:36:46.171701    1600 out.go:352] Setting JSON to true
	I0923 16:36:46.190315    1600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":369,"bootTime":1727134237,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:36:46.190422    1600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:36:46.195638    1600 out.go:97] [download-only-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:36:46.195781    1600 notify.go:220] Checking for updates...
	W0923 16:36:46.195802    1600 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 16:36:46.199578    1600 out.go:169] MINIKUBE_LOCATION=19696
	I0923 16:36:46.201201    1600 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:36:46.205670    1600 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:36:46.208714    1600 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:36:46.211603    1600 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	W0923 16:36:46.217651    1600 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 16:36:46.217876    1600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:36:46.221503    1600 out.go:97] Using the qemu2 driver based on user configuration
	I0923 16:36:46.221522    1600 start.go:297] selected driver: qemu2
	I0923 16:36:46.221525    1600 start.go:901] validating driver "qemu2" against <nil>
	I0923 16:36:46.221591    1600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 16:36:46.224677    1600 out.go:169] Automatically selected the socket_vmnet network
	I0923 16:36:46.230776    1600 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 16:36:46.230873    1600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 16:36:46.230931    1600 cni.go:84] Creating CNI manager for ""
	I0923 16:36:46.230972    1600 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 16:36:46.231025    1600 start.go:340] cluster config:
	{Name:download-only-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:36:46.235887    1600 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 16:36:46.238541    1600 out.go:97] Downloading VM boot image ...
	I0923 16:36:46.238555    1600 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0923 16:36:55.471437    1600 out.go:97] Starting "download-only-711000" primary control-plane node in "download-only-711000" cluster
	I0923 16:36:55.471464    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:36:55.528109    1600 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 16:36:55.528117    1600 cache.go:56] Caching tarball of preloaded images
	I0923 16:36:55.528284    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:36:55.533420    1600 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 16:36:55.533427    1600 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:36:55.613043    1600 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 16:37:02.062404    1600 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:37:02.062574    1600 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:37:02.757773    1600 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 16:37:02.757978    1600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/download-only-711000/config.json ...
	I0923 16:37:02.757994    1600 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/download-only-711000/config.json: {Name:mk62623163fd2442f60858d058e4f341b8f3d648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:02.758242    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:37:02.758466    1600 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 16:37:03.362385    1600 out.go:193] 
	W0923 16:37:03.371479    1600 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0] Decompressors:map[bz2:0x14000120da0 gz:0x14000120da8 tar:0x14000120ce0 tar.bz2:0x14000120d10 tar.gz:0x14000120d40 tar.xz:0x14000120d50 tar.zst:0x14000120d60 tbz2:0x14000120d10 tgz:0x14000120d40 txz:0x14000120d50 tzst:0x14000120d60 xz:0x14000120dc0 zip:0x14000120dd0 zst:0x14000120dc8] Getters:map[file:0x14000714840 http:0x140006d2410 https:0x140006d2690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 16:37:03.371519    1600 out_reason.go:110] 
	W0923 16:37:03.379377    1600 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 16:37:03.383298    1600 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-711000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (17.27s)
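
The root cause is the 404 on the kubectl checksum URL: upstream Kubernetes did not publish darwin/arm64 kubectl binaries for v1.20.0 (arm64 macOS builds only arrived in later releases), so dl.k8s.io has nothing to serve at that path. This is an inference from the release history, not something the log states. A quick confirmation from any host:

	# Probe the exact checksum URL the test fetched; an HTTP 404 here means the
	# artifact simply does not exist upstream for this version/arch combination.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1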

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
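
This failure is downstream of the 404 above: the cached binary was never written, so the existence check fails. Assuming the same MINIKUBE_HOME as the log, the assertion can be reproduced directly:

	# The test stats this exact path; it is absent because the download failed.
	stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl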

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
I0923 16:37:18.895154    1596 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-666000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-666000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (190.4445ms)

-- stdout --
	* [binary-mirror-666000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-666000" primary control-plane node in "binary-mirror-666000" cluster
	
	

-- /stdout --
** stderr ** 
	I0923 16:37:18.955831    1675 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:37:18.955992    1675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:18.955996    1675 out.go:358] Setting ErrFile to fd 2...
	I0923 16:37:18.955998    1675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:18.956133    1675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:37:18.957360    1675 out.go:352] Setting JSON to false
	I0923 16:37:18.975989    1675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":401,"bootTime":1727134237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:37:18.976061    1675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:37:18.980057    1675 out.go:177] * [binary-mirror-666000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:37:18.987938    1675 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 16:37:18.987996    1675 notify.go:220] Checking for updates...
	I0923 16:37:18.994875    1675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:37:18.997964    1675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:37:19.001016    1675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:37:19.003926    1675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 16:37:19.007089    1675 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:37:19.010977    1675 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 16:37:19.017999    1675 start.go:297] selected driver: qemu2
	I0923 16:37:19.018008    1675 start.go:901] validating driver "qemu2" against <nil>
	I0923 16:37:19.018072    1675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 16:37:19.021002    1675 out.go:177] * Automatically selected the socket_vmnet network
	I0923 16:37:19.026230    1675 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 16:37:19.026324    1675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 16:37:19.026348    1675 cni.go:84] Creating CNI manager for ""
	I0923 16:37:19.026375    1675 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 16:37:19.026382    1675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 16:37:19.026431    1675 start.go:340] cluster config:
	{Name:binary-mirror-666000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-666000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:37:19.030394    1675 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 16:37:19.038831    1675 out.go:177] * Starting "binary-mirror-666000" primary control-plane node in "binary-mirror-666000" cluster
	I0923 16:37:19.042905    1675 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:19.042953    1675 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 16:37:19.042966    1675 cache.go:56] Caching tarball of preloaded images
	I0923 16:37:19.043070    1675 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 16:37:19.043077    1675 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 16:37:19.043267    1675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/binary-mirror-666000/config.json ...
	I0923 16:37:19.043279    1675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/binary-mirror-666000/config.json: {Name:mk9f9dc3d6c6dcd2f92543c63ab2558a322b33ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:19.043627    1675 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:19.043681    1675 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0923 16:37:19.092944    1675 out.go:201] 
	W0923 16:37:19.096892    1675 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0] Decompressors:map[bz2:0x14000527cf0 gz:0x14000527cf8 tar:0x14000527c80 tar.bz2:0x14000527c90 tar.gz:0x14000527cb0 tar.xz:0x14000527cc0 tar.zst:0x14000527ce0 tbz2:0x14000527c90 tgz:0x14000527cb0 txz:0x14000527cc0 tzst:0x14000527ce0 xz:0x14000527d00 zip:0x14000527d10 zst:0x14000527d08] Getters:map[file:0x14000781d90 http:0x1400052e230 https:0x1400052e280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0 0x1069e96c0] Decompressors:map[bz2:0x14000527cf0 gz:0x14000527cf8 tar:0x14000527c80 tar.bz2:0x14000527c90 tar.gz:0x14000527cb0 tar.xz:0x14000527cc0 tar.zst:0x14000527ce0 tbz2:0x14000527c90 tgz:0x14000527cb0 txz:0x14000527cc0 tzst:0x14000527ce0 xz:0x14000527d00 zip:0x14000527d10 zst:0x14000527d08] Getters:map[file:0x14000781d90 http:0x1400052e230 https:0x1400052e280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0923 16:37:19.096898    1675 out.go:270] * 
	* 
	W0923 16:37:19.097344    1675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 16:37:19.111899    1675 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-666000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-666000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-666000
--- FAIL: TestBinaryMirror (0.29s)
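
Unlike the 404 above, this run died with "unexpected EOF": the stub mirror the test serves on 127.0.0.1:49312 closed the connection mid-download. A hedged way to distinguish a dropped connection from an HTTP-level error when pointing minikube at a --binary-mirror (the port below is this run's ephemeral stub; substitute your own mirror):

	# -f makes curl exit non-zero on HTTP errors (e.g. 404), while a reset or
	# truncated response surfaces as a curl transport error instead.
	curl -fsSI "http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256"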

TestOffline (10.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.912112666s)

-- stdout --
	* [offline-docker-754000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-754000" primary control-plane node in "offline-docker-754000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-754000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:15:35.012365    4081 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:15:35.012532    4081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:35.012535    4081 out.go:358] Setting ErrFile to fd 2...
	I0923 17:15:35.012538    4081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:35.012683    4081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:15:35.013974    4081 out.go:352] Setting JSON to false
	I0923 17:15:35.031470    4081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2698,"bootTime":1727134237,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:15:35.031544    4081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:15:35.036079    4081 out.go:177] * [offline-docker-754000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:15:35.043920    4081 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:15:35.043957    4081 notify.go:220] Checking for updates...
	I0923 17:15:35.050967    4081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:15:35.053948    4081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:15:35.056960    4081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:15:35.059896    4081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:15:35.062928    4081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:15:35.066313    4081 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:35.066363    4081 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:15:35.069874    4081 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:15:35.076944    4081 start.go:297] selected driver: qemu2
	I0923 17:15:35.076954    4081 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:15:35.076962    4081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:15:35.078767    4081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:15:35.081928    4081 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:15:35.084977    4081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:15:35.084996    4081 cni.go:84] Creating CNI manager for ""
	I0923 17:15:35.085023    4081 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:15:35.085027    4081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:15:35.085067    4081 start.go:340] cluster config:
	{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:15:35.088678    4081 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:35.095919    4081 out.go:177] * Starting "offline-docker-754000" primary control-plane node in "offline-docker-754000" cluster
	I0923 17:15:35.099915    4081 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:15:35.099949    4081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:15:35.099958    4081 cache.go:56] Caching tarball of preloaded images
	I0923 17:15:35.100055    4081 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:15:35.100066    4081 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:15:35.100133    4081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/offline-docker-754000/config.json ...
	I0923 17:15:35.100144    4081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/offline-docker-754000/config.json: {Name:mkaf61fc4590780ff648dfb4d86464b8ac9d308d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:15:35.100465    4081 start.go:360] acquireMachinesLock for offline-docker-754000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:35.100505    4081 start.go:364] duration metric: took 32.958µs to acquireMachinesLock for "offline-docker-754000"
	I0923 17:15:35.100518    4081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:35.100554    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:35.104902    4081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:35.120709    4081 start.go:159] libmachine.API.Create for "offline-docker-754000" (driver="qemu2")
	I0923 17:15:35.120762    4081 client.go:168] LocalClient.Create starting
	I0923 17:15:35.120852    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:35.120892    4081 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:35.120907    4081 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:35.120958    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:35.120982    4081 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:35.120993    4081 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:35.121441    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:35.283219    4081 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:35.430040    4081 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:35.430050    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:35.430282    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:35.440175    4081 main.go:141] libmachine: STDOUT: 
	I0923 17:15:35.440199    4081 main.go:141] libmachine: STDERR: 
	I0923 17:15:35.440286    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2 +20000M
	I0923 17:15:35.449117    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:35.449142    4081 main.go:141] libmachine: STDERR: 
	I0923 17:15:35.449162    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:35.449169    4081 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:35.449180    4081 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:35.449212    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:c1:a2:76:33:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:35.450954    4081 main.go:141] libmachine: STDOUT: 
	I0923 17:15:35.450972    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:35.450993    4081 client.go:171] duration metric: took 330.226542ms to LocalClient.Create
	I0923 17:15:37.453015    4081 start.go:128] duration metric: took 2.352527042s to createHost
	I0923 17:15:37.453052    4081 start.go:83] releasing machines lock for "offline-docker-754000", held for 2.35261875s
	W0923 17:15:37.453081    4081 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:37.463947    4081 out.go:177] * Deleting "offline-docker-754000" in qemu2 ...
	W0923 17:15:37.478609    4081 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:37.478622    4081 start.go:729] Will try again in 5 seconds ...
	I0923 17:15:42.480776    4081 start.go:360] acquireMachinesLock for offline-docker-754000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:42.481244    4081 start.go:364] duration metric: took 358.958µs to acquireMachinesLock for "offline-docker-754000"
	I0923 17:15:42.481387    4081 start.go:93] Provisioning new machine with config: &{Name:offline-docker-754000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:42.481720    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:42.501260    4081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:42.556037    4081 start.go:159] libmachine.API.Create for "offline-docker-754000" (driver="qemu2")
	I0923 17:15:42.556098    4081 client.go:168] LocalClient.Create starting
	I0923 17:15:42.556225    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:42.556303    4081 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:42.556323    4081 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:42.556388    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:42.556434    4081 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:42.556449    4081 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:42.557057    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:42.728797    4081 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:42.822548    4081 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:42.822554    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:42.822774    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:42.831973    4081 main.go:141] libmachine: STDOUT: 
	I0923 17:15:42.831991    4081 main.go:141] libmachine: STDERR: 
	I0923 17:15:42.832047    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2 +20000M
	I0923 17:15:42.839984    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:42.839999    4081 main.go:141] libmachine: STDERR: 
	I0923 17:15:42.840017    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:42.840023    4081 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:42.840032    4081 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:42.840061    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:a2:32:f7:88:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/offline-docker-754000/disk.qcow2
	I0923 17:15:42.841616    4081 main.go:141] libmachine: STDOUT: 
	I0923 17:15:42.841629    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:42.841642    4081 client.go:171] duration metric: took 285.542417ms to LocalClient.Create
	I0923 17:15:44.843791    4081 start.go:128] duration metric: took 2.362100708s to createHost
	I0923 17:15:44.843887    4081 start.go:83] releasing machines lock for "offline-docker-754000", held for 2.362695375s
	W0923 17:15:44.844245    4081 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-754000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:44.860791    4081 out.go:201] 
	W0923 17:15:44.865900    4081 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:15:44.865929    4081 out.go:270] * 
	W0923 17:15:44.868738    4081 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:15:44.881805    4081 out.go:201] 

** /stderr **
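
The failure above is not a QEMU problem: both qemu-img steps succeed, and the launch only dies when /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal preflight check for that condition, sketched in Go using only the socket path from the log (checkSocketVMnet and the 2-second timeout are illustrative, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// checkSocketVMnet dials the unix socket that socket_vmnet_client
	// connects to. A "connection refused" here reproduces the failure in
	// the stderr above: the socket_vmnet daemon is not running, or is not
	// listening on this path.
	func checkSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

The ~10s exit-status-80 failures elsewhere in this report show the same refusal, suggesting the daemon on the build agent, rather than any individual test, is at fault.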
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-754000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-23 17:15:44.895959 -0700 PDT m=+2338.927473668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-754000 -n offline-docker-754000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-754000 -n offline-docker-754000: exit status 7 (69.984875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-754000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-754000
--- FAIL: TestOffline (10.07s)
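
For contrast, the disk-image phase that precedes the failed launch completes normally and follows a simple convert-then-resize pattern with qemu-img. A standalone sketch of that sequence, using exactly the flags shown in the log (createDisk, the literal file names, and the error wrapping are illustrative, not the driver's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img invocations in the log:
	// convert the raw seed image to qcow2, then grow the qcow2 by the
	// requested number of megabytes (the log uses +20000M).
	func createDisk(rawPath, qcowPath string, extraMB int) error {
		convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcowPath)
		if out, err := convert.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		resize := exec.Command("qemu-img", "resize", qcowPath, fmt.Sprintf("+%dM", extraMB))
		if out, err := resize.CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}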

TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.512667ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-h9ld7" [957ab26e-5223-48ff-90ce-62f677de8be0] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0047325s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4znqx" [55be3d2c-a04d-4e79-ae58-eabab8942dc0] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008771375s
addons_test.go:338: (dbg) Run:  kubectl --context addons-938000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-938000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-938000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.068378375s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-938000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
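
The check that timed out is a plain HTTP reachability probe of the registry Service's cluster DNS name. A sketch of the equivalent probe in Go, assuming it runs inside the cluster where that name resolves (probeRegistry is illustrative; http.Head stands in for busybox's header-only `wget --spider -S`):

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// probeRegistry issues a header-only request and demands the HTTP 200
	// the test expects; a client timeout here corresponds to the 1m0s
	// failure recorded above.
	func probeRegistry(url string) error {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("expected 200, got %s", resp.Status)
		}
		return nil
	}

	func main() {
		if err := probeRegistry("http://registry.kube-system.svc.cluster.local"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("registry reachable")
	}

Both registry pods report Running, and the test falls back to probing the node IP directly on port 5000 below, so the timeout suggests in-cluster name resolution or Service routing rather than the registry pods themselves.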
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 ip
2024/09/23 16:50:29 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-938000 -n addons-938000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:36 PDT |                     |
	|         | -p download-only-711000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| delete  | -p download-only-711000              | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| start   | -o=json --download-only              | download-only-940000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT |                     |
	|         | -p download-only-940000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| delete  | -p download-only-940000              | download-only-940000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| delete  | -p download-only-711000              | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| delete  | -p download-only-940000              | download-only-940000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| start   | --download-only -p                   | binary-mirror-666000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT |                     |
	|         | binary-mirror-666000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-666000              | binary-mirror-666000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| addons  | enable dashboard -p                  | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT |                     |
	|         | addons-938000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT |                     |
	|         | addons-938000                        |                      |         |         |                     |                     |
	| start   | -p addons-938000 --wait=true         | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:40 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-938000 addons disable         | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:41 PDT | 23 Sep 24 16:41 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:49 PDT | 23 Sep 24 16:49 PDT |
	|         | -p addons-938000                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-938000 addons disable         | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:49 PDT | 23 Sep 24 16:49 PDT |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-938000 addons                 | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT | 23 Sep 24 16:50 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-938000 addons                 | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT | 23 Sep 24 16:50 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-938000 addons                 | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT | 23 Sep 24 16:50 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT |                     |
	|         | addons-938000                        |                      |         |         |                     |                     |
	| ip      | addons-938000 ip                     | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT | 23 Sep 24 16:50 PDT |
	| addons  | addons-938000 addons disable         | addons-938000        | jenkins | v1.34.0 | 23 Sep 24 16:50 PDT | 23 Sep 24 16:50 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 16:37:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 16:37:19.277972    1689 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:37:19.278101    1689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:19.278105    1689 out.go:358] Setting ErrFile to fd 2...
	I0923 16:37:19.278107    1689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:19.278241    1689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:37:19.279285    1689 out.go:352] Setting JSON to false
	I0923 16:37:19.296440    1689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":402,"bootTime":1727134237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:37:19.296504    1689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:37:19.300781    1689 out.go:177] * [addons-938000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:37:19.307968    1689 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 16:37:19.308026    1689 notify.go:220] Checking for updates...
	I0923 16:37:19.314923    1689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:37:19.317932    1689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:37:19.320973    1689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:37:19.323907    1689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 16:37:19.326961    1689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 16:37:19.330090    1689 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:37:19.333888    1689 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 16:37:19.341999    1689 start.go:297] selected driver: qemu2
	I0923 16:37:19.342004    1689 start.go:901] validating driver "qemu2" against <nil>
	I0923 16:37:19.342009    1689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 16:37:19.344274    1689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 16:37:19.346898    1689 out.go:177] * Automatically selected the socket_vmnet network
	I0923 16:37:19.349967    1689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 16:37:19.349987    1689 cni.go:84] Creating CNI manager for ""
	I0923 16:37:19.350010    1689 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 16:37:19.350015    1689 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 16:37:19.350047    1689 start.go:340] cluster config:
	{Name:addons-938000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:37:19.353877    1689 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 16:37:19.361929    1689 out.go:177] * Starting "addons-938000" primary control-plane node in "addons-938000" cluster
	I0923 16:37:19.365890    1689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:19.365904    1689 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 16:37:19.365910    1689 cache.go:56] Caching tarball of preloaded images
	I0923 16:37:19.365974    1689 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 16:37:19.365981    1689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 16:37:19.366183    1689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/config.json ...
	I0923 16:37:19.366195    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/config.json: {Name:mk4430b9110bd1e2aa8b169b473e3c728edbf470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:19.366645    1689 start.go:360] acquireMachinesLock for addons-938000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 16:37:19.366724    1689 start.go:364] duration metric: took 72.667µs to acquireMachinesLock for "addons-938000"
	I0923 16:37:19.366740    1689 start.go:93] Provisioning new machine with config: &{Name:addons-938000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 16:37:19.366766    1689 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 16:37:19.374924    1689 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 16:37:19.613318    1689 start.go:159] libmachine.API.Create for "addons-938000" (driver="qemu2")
	I0923 16:37:19.613369    1689 client.go:168] LocalClient.Create starting
	I0923 16:37:19.613533    1689 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 16:37:19.655678    1689 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 16:37:19.750678    1689 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 16:37:20.625807    1689 main.go:141] libmachine: Creating SSH key...
	I0923 16:37:20.709012    1689 main.go:141] libmachine: Creating Disk image...
	I0923 16:37:20.709018    1689 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 16:37:20.710842    1689 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2
	I0923 16:37:20.730409    1689 main.go:141] libmachine: STDOUT: 
	I0923 16:37:20.730429    1689 main.go:141] libmachine: STDERR: 
	I0923 16:37:20.730494    1689 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2 +20000M
	I0923 16:37:20.738817    1689 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 16:37:20.738831    1689 main.go:141] libmachine: STDERR: 
	I0923 16:37:20.738843    1689 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2
	I0923 16:37:20.738851    1689 main.go:141] libmachine: Starting QEMU VM...
	I0923 16:37:20.738888    1689 qemu.go:418] Using hvf for hardware acceleration
	I0923 16:37:20.738916    1689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:de:e6:4c:40:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/disk.qcow2
	I0923 16:37:20.795839    1689 main.go:141] libmachine: STDOUT: 
	I0923 16:37:20.795863    1689 main.go:141] libmachine: STDERR: 
	I0923 16:37:20.795867    1689 main.go:141] libmachine: Attempt 0
	I0923 16:37:20.795879    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:20.795955    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:20.795976    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:22.798097    1689 main.go:141] libmachine: Attempt 1
	I0923 16:37:22.798184    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:22.798537    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:22.798586    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:24.800864    1689 main.go:141] libmachine: Attempt 2
	I0923 16:37:24.801049    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:24.801218    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:24.801268    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:26.803410    1689 main.go:141] libmachine: Attempt 3
	I0923 16:37:26.803435    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:26.803501    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:26.803521    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:28.805524    1689 main.go:141] libmachine: Attempt 4
	I0923 16:37:28.805533    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:28.805573    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:28.805580    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:30.807648    1689 main.go:141] libmachine: Attempt 5
	I0923 16:37:30.807673    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:30.807739    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:30.807752    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:32.809770    1689 main.go:141] libmachine: Attempt 6
	I0923 16:37:32.809794    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:32.809862    1689 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0923 16:37:32.809872    1689 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f34d01}
	I0923 16:37:34.811918    1689 main.go:141] libmachine: Attempt 7
	I0923 16:37:34.811944    1689 main.go:141] libmachine: Searching for 26:de:e6:4c:40:74 in /var/db/dhcpd_leases ...
	I0923 16:37:34.812001    1689 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0923 16:37:34.812014    1689 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:26:de:e6:4c:40:74 ID:1,26:de:e6:4c:40:74 Lease:0x66f34d3d}
	I0923 16:37:34.812017    1689 main.go:141] libmachine: Found match: 26:de:e6:4c:40:74
	I0923 16:37:34.812025    1689 main.go:141] libmachine: IP: 192.168.105.2
	I0923 16:37:34.812030    1689 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0923 16:37:36.833242    1689 machine.go:93] provisionDockerMachine start ...
	I0923 16:37:36.834929    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:36.835442    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:36.835460    1689 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 16:37:36.909752    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 16:37:36.909783    1689 buildroot.go:166] provisioning hostname "addons-938000"
	I0923 16:37:36.909908    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:36.910157    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:36.910167    1689 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-938000 && echo "addons-938000" | sudo tee /etc/hostname
	I0923 16:37:36.976934    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-938000
	
	I0923 16:37:36.977035    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:36.977215    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:36.977227    1689 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-938000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-938000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-938000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 16:37:37.032155    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 16:37:37.032173    1689 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19696-1109/.minikube CaCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19696-1109/.minikube}
	I0923 16:37:37.032183    1689 buildroot.go:174] setting up certificates
	I0923 16:37:37.032188    1689 provision.go:84] configureAuth start
	I0923 16:37:37.032192    1689 provision.go:143] copyHostCerts
	I0923 16:37:37.032295    1689 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem (1679 bytes)
	I0923 16:37:37.032544    1689 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem (1082 bytes)
	I0923 16:37:37.032693    1689 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem (1123 bytes)
	I0923 16:37:37.032806    1689 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem org=jenkins.addons-938000 san=[127.0.0.1 192.168.105.2 addons-938000 localhost minikube]
	I0923 16:37:37.284709    1689 provision.go:177] copyRemoteCerts
	I0923 16:37:37.284987    1689 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 16:37:37.285008    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:37.310635    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 16:37:37.319167    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 16:37:37.327332    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 16:37:37.335440    1689 provision.go:87] duration metric: took 303.247416ms to configureAuth
	I0923 16:37:37.335449    1689 buildroot.go:189] setting minikube options for container-runtime
	I0923 16:37:37.335555    1689 config.go:182] Loaded profile config "addons-938000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:37:37.335640    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:37.335742    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:37.335748    1689 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 16:37:37.384166    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 16:37:37.384176    1689 buildroot.go:70] root file system type: tmpfs
	I0923 16:37:37.384233    1689 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 16:37:37.384290    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:37.384404    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:37.384446    1689 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 16:37:37.433079    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 16:37:37.433134    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:37.433236    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:37.433245    1689 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 16:37:38.809678    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 16:37:38.809697    1689 machine.go:96] duration metric: took 1.976460333s to provisionDockerMachine
	I0923 16:37:38.809704    1689 client.go:171] duration metric: took 19.196716875s to LocalClient.Create
	I0923 16:37:38.809718    1689 start.go:167] duration metric: took 19.196791334s to libmachine.API.Create "addons-938000"
	I0923 16:37:38.809725    1689 start.go:293] postStartSetup for "addons-938000" (driver="qemu2")
	I0923 16:37:38.809732    1689 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 16:37:38.809819    1689 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 16:37:38.809830    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:38.836080    1689 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 16:37:38.838167    1689 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 16:37:38.838179    1689 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/addons for local assets ...
	I0923 16:37:38.838280    1689 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/files for local assets ...
	I0923 16:37:38.838318    1689 start.go:296] duration metric: took 28.590667ms for postStartSetup
	I0923 16:37:38.838776    1689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/config.json ...
	I0923 16:37:38.839009    1689 start.go:128] duration metric: took 19.472625209s to createHost
	I0923 16:37:38.839046    1689 main.go:141] libmachine: Using SSH client type: native
	I0923 16:37:38.839139    1689 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100789c00] 0x10078c440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0923 16:37:38.839144    1689 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 16:37:38.887787    1689 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727134659.163140378
	
	I0923 16:37:38.887798    1689 fix.go:216] guest clock: 1727134659.163140378
	I0923 16:37:38.887802    1689 fix.go:229] Guest: 2024-09-23 16:37:39.163140378 -0700 PDT Remote: 2024-09-23 16:37:38.839012 -0700 PDT m=+19.580514876 (delta=324.128378ms)
	I0923 16:37:38.887818    1689 fix.go:200] guest clock delta is within tolerance: 324.128378ms
	I0923 16:37:38.887821    1689 start.go:83] releasing machines lock for "addons-938000", held for 19.521485s
	I0923 16:37:38.888173    1689 ssh_runner.go:195] Run: cat /version.json
	I0923 16:37:38.888183    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:38.888436    1689 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 16:37:38.888468    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:38.958749    1689 ssh_runner.go:195] Run: systemctl --version
	I0923 16:37:38.961199    1689 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 16:37:38.964520    1689 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 16:37:38.964554    1689 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 16:37:38.970037    1689 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 16:37:38.970045    1689 start.go:495] detecting cgroup driver to use...
	I0923 16:37:38.970169    1689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 16:37:38.976435    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 16:37:38.980023    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 16:37:38.983885    1689 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 16:37:38.983913    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 16:37:38.987589    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 16:37:38.991295    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 16:37:38.995335    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 16:37:38.999255    1689 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 16:37:39.003210    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 16:37:39.007349    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 16:37:39.011323    1689 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 16:37:39.015419    1689 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 16:37:39.019211    1689 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 16:37:39.019236    1689 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 16:37:39.027677    1689 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 16:37:39.032186    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:39.100966    1689 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 16:37:39.108938    1689 start.go:495] detecting cgroup driver to use...
	I0923 16:37:39.109012    1689 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 16:37:39.115131    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 16:37:39.123497    1689 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 16:37:39.132047    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 16:37:39.137577    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 16:37:39.142787    1689 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 16:37:39.183102    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 16:37:39.189316    1689 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 16:37:39.195831    1689 ssh_runner.go:195] Run: which cri-dockerd
	I0923 16:37:39.197322    1689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 16:37:39.200341    1689 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 16:37:39.205886    1689 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 16:37:39.272227    1689 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 16:37:39.343052    1689 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 16:37:39.343105    1689 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 16:37:39.349601    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:39.416588    1689 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 16:37:41.601187    1689 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.184625167s)
	I0923 16:37:41.601271    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 16:37:41.606808    1689 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 16:37:41.613898    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 16:37:41.619186    1689 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 16:37:41.692030    1689 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 16:37:41.755547    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:41.820417    1689 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 16:37:41.827195    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 16:37:41.833009    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:41.912375    1689 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 16:37:41.939087    1689 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 16:37:41.939395    1689 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 16:37:41.941948    1689 start.go:563] Will wait 60s for crictl version
	I0923 16:37:41.941993    1689 ssh_runner.go:195] Run: which crictl
	I0923 16:37:41.943408    1689 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 16:37:41.962021    1689 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 16:37:41.962098    1689 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 16:37:41.974720    1689 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 16:37:41.989635    1689 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 16:37:41.989996    1689 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0923 16:37:41.991441    1689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 16:37:41.995683    1689 kubeadm.go:883] updating cluster {Name:addons-938000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 16:37:41.995732    1689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:41.995780    1689 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 16:37:42.000584    1689 docker.go:685] Got preloaded images: 
	I0923 16:37:42.000592    1689 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 16:37:42.000639    1689 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 16:37:42.004213    1689 ssh_runner.go:195] Run: which lz4
	I0923 16:37:42.005670    1689 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 16:37:42.007054    1689 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 16:37:42.007067    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0923 16:37:43.235076    1689 docker.go:649] duration metric: took 1.229480042s to copy over tarball
	I0923 16:37:43.235144    1689 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 16:37:44.203993    1689 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 16:37:44.219068    1689 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 16:37:44.223074    1689 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 16:37:44.229581    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:44.306678    1689 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 16:37:47.256355    1689 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.94970925s)
	I0923 16:37:47.256468    1689 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 16:37:47.262464    1689 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 16:37:47.262475    1689 cache_images.go:84] Images are preloaded, skipping loading
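Because the first docker images call came back empty, minikube streamed the 322 MB preload tarball to the guest and untarred it over /var, which is why the second listing shows the complete v1.31.1 control-plane image set without a single registry pull. A condensed replay of that path (in the real run the copy goes over minikube's own SSH session; user and IP taken from the log's ssh client lines):

    scp preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 docker@192.168.105.2:/preloaded.tar.lz4
    ssh docker@192.168.105.2 'sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4 \
        && sudo systemctl restart docker && docker images'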
	I0923 16:37:47.262480    1689 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0923 16:37:47.262542    1689 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-938000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
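The empty ExecStart= line in the unit above is deliberate: systemd drop-ins append to the base unit, so the inherited ExecStart must be cleared before the minikube-specific one is declared, otherwise the service would carry two start commands. Once the files land on the node, this can be confirmed with:

    sudo systemctl cat kubelet | grep -A1 '^ExecStart=$'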
	I0923 16:37:47.262606    1689 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 16:37:47.281787    1689 cni.go:84] Creating CNI manager for ""
	I0923 16:37:47.281807    1689 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 16:37:47.281813    1689 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 16:37:47.281825    1689 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-938000 NodeName:addons-938000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 16:37:47.281910    1689 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-938000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 16:37:47.281969    1689 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 16:37:47.285850    1689 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 16:37:47.285887    1689 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 16:37:47.289273    1689 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 16:37:47.295138    1689 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 16:37:47.301080    1689 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
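Note that the kubeadm.yaml staged here still uses the kubeadm.k8s.io/v1beta3 API; kubeadm init flags exactly that as deprecated in the two W0923 warnings further down. The upstream-recommended fix is a mechanical migration (the new-config path here is hypothetical):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-migrated.yaml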
	I0923 16:37:47.307083    1689 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0923 16:37:47.308396    1689 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 16:37:47.312141    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:47.376097    1689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 16:37:47.385629    1689 certs.go:68] Setting up /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000 for IP: 192.168.105.2
	I0923 16:37:47.385648    1689 certs.go:194] generating shared ca certs ...
	I0923 16:37:47.385658    1689 certs.go:226] acquiring lock for ca certs: {Name:mk0bd8a887d4e289277fd6cf7c9ed1b474966431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.385825    1689 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key
	I0923 16:37:47.488607    1689 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt ...
	I0923 16:37:47.488616    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt: {Name:mkfea53ce9236a224a326f651536f5ca60473244 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.489091    1689 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key ...
	I0923 16:37:47.489097    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key: {Name:mk07ba770fda511469c66862e25c899c66122aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.489266    1689 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key
	I0923 16:37:47.677645    1689 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt ...
	I0923 16:37:47.677650    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt: {Name:mkf210f230832174f55edb99b3f88e80e1cd83cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.677870    1689 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key ...
	I0923 16:37:47.677874    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key: {Name:mk68ff00c3bde017ca548668a1e310991b23f12e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.678022    1689 certs.go:256] generating profile certs ...
	I0923 16:37:47.678072    1689 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.key
	I0923 16:37:47.678084    1689 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt with IP's: []
	I0923 16:37:47.749729    1689 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt ...
	I0923 16:37:47.749734    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: {Name:mkb8af1572cfd30316c1b630bff60fdff438f8e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.749927    1689 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.key ...
	I0923 16:37:47.749931    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.key: {Name:mk2ecb5cccf2e443885397d5f5ee1fade0d5c7da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.750072    1689 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key.23dd2cfa
	I0923 16:37:47.750083    1689 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt.23dd2cfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0923 16:37:47.849932    1689 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt.23dd2cfa ...
	I0923 16:37:47.849936    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt.23dd2cfa: {Name:mkf1c33f9bcb6c0ed583f2ff98dce7f1304ceb38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.850090    1689 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key.23dd2cfa ...
	I0923 16:37:47.850094    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key.23dd2cfa: {Name:mk0b6efd0b6cef39402d5295c309820219620eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.850229    1689 certs.go:381] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt.23dd2cfa -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt
	I0923 16:37:47.850356    1689 certs.go:385] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key.23dd2cfa -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key
	I0923 16:37:47.850465    1689 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.key
	I0923 16:37:47.850476    1689 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.crt with IP's: []
	I0923 16:37:47.941114    1689 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.crt ...
	I0923 16:37:47.941118    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.crt: {Name:mk0a37ab7f60a54b2418d60ed595eae5dc8f648c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.941290    1689 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.key ...
	I0923 16:37:47.941293    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.key: {Name:mk055ba0ed4e46e6a1bb0a9e9ebc95bd11872625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:47.941601    1689 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 16:37:47.941642    1689 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem (1082 bytes)
	I0923 16:37:47.941688    1689 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem (1123 bytes)
	I0923 16:37:47.941721    1689 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem (1679 bytes)
	I0923 16:37:47.942340    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 16:37:47.951774    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 16:37:47.960034    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 16:37:47.968140    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 16:37:47.976217    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 16:37:47.984526    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 16:37:47.992723    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 16:37:48.000760    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 16:37:48.009202    1689 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 16:37:48.025261    1689 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 16:37:48.033110    1689 ssh_runner.go:195] Run: openssl version
	I0923 16:37:48.035484    1689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 16:37:48.039874    1689 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 16:37:48.041559    1689 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0923 16:37:48.041590    1689 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 16:37:48.043822    1689 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
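The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, the c_rehash-style naming that TLS clients use to look up CAs in /etc/ssl/certs. Derived by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0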
	I0923 16:37:48.047588    1689 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 16:37:48.048946    1689 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 16:37:48.048989    1689 kubeadm.go:392] StartCluster: {Name:addons-938000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-938000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:37:48.049068    1689 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 16:37:48.054031    1689 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 16:37:48.057965    1689 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 16:37:48.061504    1689 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 16:37:48.065054    1689 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 16:37:48.065061    1689 kubeadm.go:157] found existing configuration files:
	
	I0923 16:37:48.065090    1689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 16:37:48.068158    1689 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 16:37:48.068189    1689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 16:37:48.071548    1689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 16:37:48.074932    1689 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 16:37:48.074961    1689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 16:37:48.078691    1689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 16:37:48.082281    1689 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 16:37:48.082316    1689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 16:37:48.085761    1689 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 16:37:48.088898    1689 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 16:37:48.088929    1689 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 16:37:48.092124    1689 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 16:37:48.114370    1689 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 16:37:48.114398    1689 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 16:37:48.157394    1689 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 16:37:48.157449    1689 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 16:37:48.157497    1689 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 16:37:48.161949    1689 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 16:37:48.178205    1689 out.go:235]   - Generating certificates and keys ...
	I0923 16:37:48.178239    1689 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 16:37:48.178269    1689 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 16:37:48.212187    1689 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 16:37:48.281096    1689 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 16:37:48.380034    1689 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 16:37:48.451357    1689 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 16:37:48.541069    1689 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 16:37:48.541157    1689 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-938000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0923 16:37:48.610704    1689 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 16:37:48.610768    1689 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-938000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0923 16:37:48.683066    1689 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 16:37:48.818100    1689 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 16:37:48.914074    1689 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 16:37:48.914134    1689 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 16:37:49.039910    1689 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 16:37:49.136469    1689 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 16:37:49.202648    1689 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 16:37:49.318695    1689 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 16:37:49.375096    1689 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 16:37:49.375433    1689 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 16:37:49.376809    1689 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 16:37:49.392019    1689 out.go:235]   - Booting up control plane ...
	I0923 16:37:49.392067    1689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 16:37:49.392108    1689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 16:37:49.392145    1689 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 16:37:49.392202    1689 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 16:37:49.392245    1689 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 16:37:49.392274    1689 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 16:37:49.473953    1689 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 16:37:49.474013    1689 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 16:37:49.975176    1689 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.269959ms
	I0923 16:37:49.975275    1689 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 16:37:52.977706    1689 kubeadm.go:310] [api-check] The API server is healthy after 3.00219821s
	I0923 16:37:52.992714    1689 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 16:37:53.003680    1689 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 16:37:53.015170    1689 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 16:37:53.015347    1689 kubeadm.go:310] [mark-control-plane] Marking the node addons-938000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 16:37:53.019832    1689 kubeadm.go:310] [bootstrap-token] Using token: iphycb.s6sg6gy1yi5f2s0k
	I0923 16:37:53.026084    1689 out.go:235]   - Configuring RBAC rules ...
	I0923 16:37:53.026150    1689 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 16:37:53.027356    1689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 16:37:53.033896    1689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 16:37:53.035136    1689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 16:37:53.036562    1689 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 16:37:53.037727    1689 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 16:37:53.390715    1689 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 16:37:53.788618    1689 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 16:37:54.385090    1689 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 16:37:54.385938    1689 kubeadm.go:310] 
	I0923 16:37:54.386011    1689 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 16:37:54.386018    1689 kubeadm.go:310] 
	I0923 16:37:54.386118    1689 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 16:37:54.386127    1689 kubeadm.go:310] 
	I0923 16:37:54.386154    1689 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 16:37:54.386237    1689 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 16:37:54.386304    1689 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 16:37:54.386314    1689 kubeadm.go:310] 
	I0923 16:37:54.386385    1689 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 16:37:54.386391    1689 kubeadm.go:310] 
	I0923 16:37:54.386481    1689 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 16:37:54.386490    1689 kubeadm.go:310] 
	I0923 16:37:54.386587    1689 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 16:37:54.386685    1689 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 16:37:54.386810    1689 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 16:37:54.386819    1689 kubeadm.go:310] 
	I0923 16:37:54.386934    1689 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 16:37:54.387039    1689 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 16:37:54.387045    1689 kubeadm.go:310] 
	I0923 16:37:54.387148    1689 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iphycb.s6sg6gy1yi5f2s0k \
	I0923 16:37:54.387299    1689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 \
	I0923 16:37:54.387337    1689 kubeadm.go:310] 	--control-plane 
	I0923 16:37:54.387344    1689 kubeadm.go:310] 
	I0923 16:37:54.387522    1689 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 16:37:54.387536    1689 kubeadm.go:310] 
	I0923 16:37:54.387633    1689 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iphycb.s6sg6gy1yi5f2s0k \
	I0923 16:37:54.387765    1689 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 
	I0923 16:37:54.388162    1689 kubeadm.go:310] W0923 23:37:48.388042    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 16:37:54.388567    1689 kubeadm.go:310] W0923 23:37:48.388543    1597 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 16:37:54.388724    1689 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
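Of the three warnings, the deprecated-API pair is addressed by the config migration noted earlier; the Service-Kubelet warning is harmless here because minikube manages the kubelet lifecycle itself, but the manual fix kubeadm suggests is simply:

    sudo systemctl enable kubelet.service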
	I0923 16:37:54.388739    1689 cni.go:84] Creating CNI manager for ""
	I0923 16:37:54.388755    1689 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 16:37:54.393096    1689 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 16:37:54.401274    1689 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 16:37:54.411355    1689 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
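The 496-byte 1-k8s.conflist payload is not reproduced in the log. A minimal bridge-plus-portmap conflist of the shape this step writes, using the 10.244.0.0/16 pod CIDR from the kubeadm config above, is sketched here as an assumption rather than the file's actual contents:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF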
	I0923 16:37:54.424392    1689 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 16:37:54.424487    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:54.424502    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-938000 minikube.k8s.io/updated_at=2024_09_23T16_37_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-938000 minikube.k8s.io/primary=true
	I0923 16:37:54.443323    1689 ops.go:34] apiserver oom_adj: -16
	I0923 16:37:54.512808    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:55.014980    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:55.514917    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:56.014908    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:56.513889    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:57.014915    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:57.514891    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:58.014892    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:58.514856    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:59.014741    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:59.514792    1689 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 16:37:59.561540    1689 kubeadm.go:1113] duration metric: took 5.13723775s to wait for elevateKubeSystemPrivileges
	I0923 16:37:59.561557    1689 kubeadm.go:394] duration metric: took 11.512801834s to StartCluster
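The burst of identical kubectl get sa default calls between 16:37:54 and 16:37:59 is a readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so minikube retries roughly twice a second until it shows up. The same loop, spelled out in shell:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done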
	I0923 16:37:59.561568    1689 settings.go:142] acquiring lock: {Name:mk533b8e20cbdc896b9e0666ee546603a1b156f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:59.561732    1689 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:37:59.561915    1689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:59.562171    1689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 16:37:59.562184    1689 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 16:37:59.562195    1689 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 16:37:59.562242    1689 addons.go:69] Setting yakd=true in profile "addons-938000"
	I0923 16:37:59.562250    1689 addons.go:234] Setting addon yakd=true in "addons-938000"
	I0923 16:37:59.562262    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562271    1689 addons.go:69] Setting default-storageclass=true in profile "addons-938000"
	I0923 16:37:59.562279    1689 config.go:182] Loaded profile config "addons-938000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:37:59.562281    1689 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-938000"
	I0923 16:37:59.562274    1689 addons.go:69] Setting cloud-spanner=true in profile "addons-938000"
	I0923 16:37:59.562288    1689 addons.go:234] Setting addon cloud-spanner=true in "addons-938000"
	I0923 16:37:59.562297    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562293    1689 addons.go:69] Setting inspektor-gadget=true in profile "addons-938000"
	I0923 16:37:59.562304    1689 addons.go:69] Setting storage-provisioner=true in profile "addons-938000"
	I0923 16:37:59.562317    1689 addons.go:69] Setting ingress=true in profile "addons-938000"
	I0923 16:37:59.562319    1689 addons.go:234] Setting addon inspektor-gadget=true in "addons-938000"
	I0923 16:37:59.562325    1689 addons.go:234] Setting addon storage-provisioner=true in "addons-938000"
	I0923 16:37:59.562331    1689 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-938000"
	I0923 16:37:59.562336    1689 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-938000"
	I0923 16:37:59.562344    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562352    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562354    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562375    1689 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-938000"
	I0923 16:37:59.562385    1689 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-938000"
	I0923 16:37:59.562393    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562427    1689 addons.go:69] Setting metrics-server=true in profile "addons-938000"
	I0923 16:37:59.562435    1689 addons.go:234] Setting addon metrics-server=true in "addons-938000"
	I0923 16:37:59.562446    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562603    1689 addons.go:69] Setting volcano=true in profile "addons-938000"
	I0923 16:37:59.562604    1689 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-938000"
	I0923 16:37:59.562607    1689 addons.go:234] Setting addon volcano=true in "addons-938000"
	I0923 16:37:59.562609    1689 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-938000"
	I0923 16:37:59.562614    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562683    1689 retry.go:31] will retry after 1.2738207s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562743    1689 retry.go:31] will retry after 546.337512ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562749    1689 addons.go:69] Setting ingress-dns=true in profile "addons-938000"
	I0923 16:37:59.562752    1689 addons.go:234] Setting addon ingress-dns=true in "addons-938000"
	I0923 16:37:59.562762    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562771    1689 retry.go:31] will retry after 869.915921ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562818    1689 retry.go:31] will retry after 1.258914901s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562825    1689 addons.go:69] Setting registry=true in profile "addons-938000"
	I0923 16:37:59.562831    1689 addons.go:234] Setting addon registry=true in "addons-938000"
	I0923 16:37:59.562837    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562837    1689 retry.go:31] will retry after 745.633883ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562326    1689 addons.go:69] Setting gcp-auth=true in profile "addons-938000"
	I0923 16:37:59.562847    1689 mustload.go:65] Loading cluster: addons-938000
	I0923 16:37:59.562858    1689 retry.go:31] will retry after 1.031673933s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562864    1689 addons.go:69] Setting volumesnapshots=true in profile "addons-938000"
	I0923 16:37:59.562868    1689 addons.go:234] Setting addon volumesnapshots=true in "addons-938000"
	I0923 16:37:59.562874    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.562915    1689 config.go:182] Loaded profile config "addons-938000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:37:59.562959    1689 retry.go:31] will retry after 722.688843ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.562322    1689 addons.go:234] Setting addon ingress=true in "addons-938000"
	I0923 16:37:59.563053    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:37:59.563054    1689 retry.go:31] will retry after 521.743412ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.563057    1689 retry.go:31] will retry after 1.144226691s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.563073    1689 retry.go:31] will retry after 1.024486873s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.563080    1689 retry.go:31] will retry after 694.645809ms: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.563090    1689 retry.go:31] will retry after 1.052950825s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.563254    1689 retry.go:31] will retry after 1.452532367s: connect: dial unix /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/monitor: connect: connection refused
	I0923 16:37:59.565014    1689 out.go:177] * Verifying Kubernetes components...
	I0923 16:37:59.574033    1689 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 16:37:59.578883    1689 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 16:37:59.578945    1689 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 16:37:59.583000    1689 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 16:37:59.583006    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 16:37:59.583012    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:59.585990    1689 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 16:37:59.585997    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 16:37:59.586002    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:37:59.619186    1689 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 16:37:59.689986    1689 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 16:37:59.721880    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 16:37:59.743873    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 16:38:00.037935    1689 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
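The sed pipeline above splices a hosts block (plus a log directive) into CoreDNS's Corefile ahead of the forward plugin, which is what the "host record injected" line confirms. The inserted fragment, read straight off the sed expression, and a quick check:

    # resulting Corefile fragment:
    #    hosts {
    #       192.168.105.1 host.minikube.internal
    #       fallthrough
    #    }
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'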
	I0923 16:38:00.039288    1689 node_ready.go:35] waiting up to 6m0s for node "addons-938000" to be "Ready" ...
	I0923 16:38:00.043992    1689 node_ready.go:49] node "addons-938000" has status "Ready":"True"
	I0923 16:38:00.044008    1689 node_ready.go:38] duration metric: took 4.700834ms for node "addons-938000" to be "Ready" ...
	I0923 16:38:00.044012    1689 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 16:38:00.050066    1689 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:00.092898    1689 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 16:38:00.096749    1689 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 16:38:00.100978    1689 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 16:38:00.101058    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 16:38:00.101069    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.110232    1689 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-938000"
	I0923 16:38:00.110254    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:38:00.116901    1689 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 16:38:00.125923    1689 out.go:177]   - Using image docker.io/busybox:stable
	I0923 16:38:00.128982    1689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 16:38:00.128990    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 16:38:00.128999    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.135287    1689 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 16:38:00.135298    1689 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 16:38:00.153562    1689 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 16:38:00.153573    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 16:38:00.161471    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 16:38:00.167948    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 16:38:00.262951    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 16:38:00.266983    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 16:38:00.266992    1689 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 16:38:00.267003    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.290071    1689 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 16:38:00.293103    1689 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 16:38:00.293114    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 16:38:00.293125    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.312900    1689 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 16:38:00.316046    1689 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 16:38:00.316061    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 16:38:00.316073    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.325067    1689 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 16:38:00.325079    1689 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 16:38:00.337288    1689 addons.go:475] Verifying addon registry=true in "addons-938000"
	I0923 16:38:00.342314    1689 out.go:177] * Verifying registry addon...
	I0923 16:38:00.350072    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 16:38:00.350415    1689 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 16:38:00.356263    1689 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 16:38:00.356275    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:00.369449    1689 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 16:38:00.369462    1689 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 16:38:00.417728    1689 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 16:38:00.417741    1689 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 16:38:00.452220    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 16:38:00.452234    1689 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 16:38:00.472648    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 16:38:00.507598    1689 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 16:38:00.507609    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 16:38:00.511540    1689 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 16:38:00.514877    1689 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 16:38:00.514904    1689 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 16:38:00.514951    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.543729    1689 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-938000" context rescaled to 1 replicas
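
The rescale logged here brings the coredns deployment down to a single replica. A minimal hand-run equivalent, assuming kubectl is pointed at this cluster's kubeconfig:

	kubectl -n kube-system scale deployment coredns --replicas=1
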
	I0923 16:38:00.594934    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 16:38:00.602585    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 16:38:00.606259    1689 addons.go:234] Setting addon default-storageclass=true in "addons-938000"
	I0923 16:38:00.606281    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:38:00.606890    1689 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 16:38:00.606896    1689 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 16:38:00.606902    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.611879    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 16:38:00.615915    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 16:38:00.618828    1689 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 16:38:00.618831    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 16:38:00.620607    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 16:38:00.622957    1689 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 16:38:00.622967    1689 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 16:38:00.622976    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.627880    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 16:38:00.633180    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 16:38:00.639887    1689 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 16:38:00.642993    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 16:38:00.643008    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 16:38:00.643021    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.676492    1689 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 16:38:00.676505    1689 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 16:38:00.709161    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:38:00.751973    1689 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 16:38:00.751989    1689 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 16:38:00.786248    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 16:38:00.786264    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 16:38:00.807171    1689 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 16:38:00.807190    1689 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 16:38:00.829395    1689 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 16:38:00.833965    1689 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 16:38:00.838014    1689 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 16:38:00.842377    1689 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 16:38:00.842386    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 16:38:00.842396    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.845630    1689 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 16:38:00.849828    1689 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 16:38:00.849840    1689 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 16:38:00.849850    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:00.852487    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 16:38:00.852578    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:00.870550    1689 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 16:38:00.870562    1689 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 16:38:00.874303    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 16:38:00.874316    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 16:38:00.911814    1689 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 16:38:00.911826    1689 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 16:38:00.980262    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 16:38:00.980279    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 16:38:00.984254    1689 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 16:38:00.984263    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 16:38:01.001016    1689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 16:38:01.001028    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 16:38:01.003678    1689 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 16:38:01.003684    1689 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 16:38:01.021966    1689 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 16:38:01.024869    1689 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 16:38:01.028928    1689 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 16:38:01.033021    1689 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 16:38:01.033034    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 16:38:01.033045    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:01.033184    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 16:38:01.033191    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 16:38:01.033532    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 16:38:01.037469    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 16:38:01.049462    1689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 16:38:01.049476    1689 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 16:38:01.058704    1689 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 16:38:01.058718    1689 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 16:38:01.087705    1689 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 16:38:01.087717    1689 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 16:38:01.114019    1689 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 16:38:01.114030    1689 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 16:38:01.120768    1689 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 16:38:01.120783    1689 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 16:38:01.161024    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 16:38:01.161034    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 16:38:01.197943    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 16:38:01.260963    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 16:38:01.264670    1689 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 16:38:01.264686    1689 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 16:38:01.266982    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 16:38:01.266987    1689 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 16:38:01.360982    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:01.402462    1689 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 16:38:01.402475    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 16:38:01.420803    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 16:38:01.420815    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 16:38:01.513579    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 16:38:01.570188    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 16:38:01.570199    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 16:38:01.633041    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.160392584s)
	I0923 16:38:01.655347    1689 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 16:38:01.655365    1689 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 16:38:01.737202    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 16:38:01.854295    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:02.054372    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:02.075119    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.222641084s)
	I0923 16:38:02.076828    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.45623575s)
	W0923 16:38:02.076846    1689 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 16:38:02.076859    1689 retry.go:31] will retry after 228.534674ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
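
The failed apply is a CRD-establishment race: csi-hostpath-snapshotclass.yaml creates a VolumeSnapshotClass in the same kubectl invocation that installs the snapshot.storage.k8s.io CRDs, and the API server has not yet registered the new kind, hence "ensure CRDs are installed first". A sketch of one way to sidestep the retry, assuming direct kubectl access to the node's addon manifests:

	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

minikube instead retries after 228ms and then falls back to apply --force, as the next lines show.
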
	I0923 16:38:02.307526    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 16:38:02.355707    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:02.852161    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:03.354731    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:03.854377    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:04.055201    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:04.378650    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:04.865539    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:05.109621    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.076155334s)
	I0923 16:38:05.109646    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.072246667s)
	I0923 16:38:05.109716    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.911839042s)
	I0923 16:38:05.109724    1689 addons.go:475] Verifying addon ingress=true in "addons-938000"
	I0923 16:38:05.109867    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.848964334s)
	I0923 16:38:05.110146    1689 addons.go:475] Verifying addon metrics-server=true in "addons-938000"
	I0923 16:38:05.109894    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.596373542s)
	I0923 16:38:05.109971    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.372823875s)
	I0923 16:38:05.110157    1689 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-938000"
	I0923 16:38:05.112855    1689 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-938000 service yakd-dashboard -n yakd-dashboard
	
	I0923 16:38:05.116906    1689 out.go:177] * Verifying ingress addon...
	I0923 16:38:05.127858    1689 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 16:38:05.139720    1689 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 16:38:05.143759    1689 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 16:38:05.205638    1689 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 16:38:05.205648    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:05.205774    1689 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 16:38:05.205780    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:05.390070    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:05.644068    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:05.645689    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:05.722055    1689 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.41456425s)
	I0923 16:38:05.854472    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:06.143602    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:06.145784    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:06.354423    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:06.555687    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:06.643306    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:06.647366    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:06.854662    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:07.143810    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:07.145431    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:07.354333    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:07.643679    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:07.645614    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:07.854770    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:08.144465    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:08.146770    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:08.315450    1689 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 16:38:08.315466    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:08.344778    1689 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 16:38:08.350516    1689 addons.go:234] Setting addon gcp-auth=true in "addons-938000"
	I0923 16:38:08.350543    1689 host.go:66] Checking if "addons-938000" exists ...
	I0923 16:38:08.351269    1689 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 16:38:08.351278    1689 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/addons-938000/id_rsa Username:docker}
	I0923 16:38:08.352902    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:08.382753    1689 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 16:38:08.386798    1689 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 16:38:08.390749    1689 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 16:38:08.390756    1689 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 16:38:08.397332    1689 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 16:38:08.397343    1689 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 16:38:08.405135    1689 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 16:38:08.405141    1689 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 16:38:08.415538    1689 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 16:38:08.643085    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:08.656963    1689 addons.go:475] Verifying addon gcp-auth=true in "addons-938000"
	I0923 16:38:08.663285    1689 out.go:177] * Verifying gcp-auth addon...
	I0923 16:38:08.671596    1689 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 16:38:08.743987    1689 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 16:38:08.744792    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:08.853913    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:09.055291    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:09.144227    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:09.145754    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:09.354226    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:09.644186    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:09.646105    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:09.852640    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:10.146782    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:10.146957    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:10.354209    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:10.644331    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:10.645944    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:10.854103    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:11.056025    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:11.143837    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:11.145664    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:11.354094    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:11.643846    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:11.645570    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:11.854226    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:12.144034    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:12.145401    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:12.353998    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:12.643414    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:12.645396    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:12.854492    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:13.144073    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:13.145803    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:13.354589    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:13.556206    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:13.642874    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:13.646348    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:13.853946    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:14.143883    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:14.145895    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:14.354036    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:14.643700    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:14.645202    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:14.853850    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:15.143430    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:15.145299    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:15.353862    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:15.643836    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:15.646467    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:15.854104    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:16.062256    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:16.143663    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:16.145251    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:16.354028    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:16.643451    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:16.646269    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:16.938984    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:17.144026    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:17.145860    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:17.353799    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:17.644121    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:17.645925    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:17.855584    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:18.145897    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:18.149729    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:18.354459    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:18.554286    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:18.643556    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:18.645235    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:18.855536    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:19.143901    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:19.145477    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:19.353989    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 16:38:19.642782    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:19.646953    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:19.856452    1689 kapi.go:107] duration metric: took 19.506425625s to wait for kubernetes.io/minikube-addons=registry ...
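
The registry waiter that just completed polls pods by label selector; the same view is available directly, e.g.:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
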
	I0923 16:38:20.143486    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:20.145101    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:20.555015    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:20.643823    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:20.645495    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:21.144866    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:21.145988    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:21.643715    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:21.645111    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:22.143732    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:22.145181    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:22.643539    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:22.645085    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:23.054402    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:23.143611    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:23.145028    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:23.643405    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:23.645260    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:24.177349    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:24.177944    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:24.643175    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:24.645982    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:25.057665    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:25.148618    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:25.150623    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:25.643690    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:25.645161    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:26.144203    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:26.146113    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:26.643559    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:26.645078    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:27.142573    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:27.145639    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:27.554342    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:27.642154    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:27.645538    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:28.143840    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:28.244492    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:28.643779    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:28.645251    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:29.144573    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:29.151907    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:29.643419    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:29.645191    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:30.053452    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:30.143317    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:30.145219    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:30.643722    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:30.644929    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:31.143480    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:31.145149    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:31.643385    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:31.645022    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:32.054021    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:32.143254    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:32.145039    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:32.647613    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:32.650699    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:33.143279    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:33.144874    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:33.643435    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:33.644792    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:34.143509    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:34.144929    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:34.554481    1689 pod_ready.go:103] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"False"
	I0923 16:38:34.643312    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:34.644930    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:35.142994    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:35.145097    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:35.555639    1689 pod_ready.go:93] pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:35.555653    1689 pod_ready.go:82] duration metric: took 35.506290458s for pod "coredns-7c65d6cfc9-5dl4c" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.555660    1689 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m6jsj" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.556971    1689 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-m6jsj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6jsj" not found
	I0923 16:38:35.556981    1689 pod_ready.go:82] duration metric: took 1.315958ms for pod "coredns-7c65d6cfc9-m6jsj" in "kube-system" namespace to be "Ready" ...
	E0923 16:38:35.556987    1689 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-m6jsj" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-m6jsj" not found
	I0923 16:38:35.556992    1689 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.560718    1689 pod_ready.go:93] pod "etcd-addons-938000" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:35.560724    1689 pod_ready.go:82] duration metric: took 3.727792ms for pod "etcd-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.560728    1689 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.563588    1689 pod_ready.go:93] pod "kube-apiserver-addons-938000" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:35.563596    1689 pod_ready.go:82] duration metric: took 2.864166ms for pod "kube-apiserver-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.563601    1689 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.566036    1689 pod_ready.go:93] pod "kube-controller-manager-addons-938000" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:35.566043    1689 pod_ready.go:82] duration metric: took 2.437958ms for pod "kube-controller-manager-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.566048    1689 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bgw42" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.645312    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:35.647486    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:35.753748    1689 pod_ready.go:93] pod "kube-proxy-bgw42" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:35.753758    1689 pod_ready.go:82] duration metric: took 187.710167ms for pod "kube-proxy-bgw42" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:35.753763    1689 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:36.143303    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:36.145171    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:36.154249    1689 pod_ready.go:93] pod "kube-scheduler-addons-938000" in "kube-system" namespace has status "Ready":"True"
	I0923 16:38:36.154255    1689 pod_ready.go:82] duration metric: took 400.497666ms for pod "kube-scheduler-addons-938000" in "kube-system" namespace to be "Ready" ...
	I0923 16:38:36.154259    1689 pod_ready.go:39] duration metric: took 36.110967708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 16:38:36.154268    1689 api_server.go:52] waiting for apiserver process to appear ...
	I0923 16:38:36.154714    1689 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 16:38:36.161803    1689 api_server.go:72] duration metric: took 36.600341917s to wait for apiserver process to appear ...
	I0923 16:38:36.161811    1689 api_server.go:88] waiting for apiserver healthz status ...
	I0923 16:38:36.161820    1689 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0923 16:38:36.164803    1689 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0923 16:38:36.165383    1689 api_server.go:141] control plane version: v1.31.1
	I0923 16:38:36.165391    1689 api_server.go:131] duration metric: took 3.576959ms to wait for apiserver health ...
	I0923 16:38:36.165395    1689 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 16:38:36.358870    1689 system_pods.go:59] 17 kube-system pods found
	I0923 16:38:36.358885    1689 system_pods.go:61] "coredns-7c65d6cfc9-5dl4c" [8c8ad3a9-4fa8-4d36-924c-c1be1d38057c] Running
	I0923 16:38:36.358889    1689 system_pods.go:61] "csi-hostpath-attacher-0" [1e4b795e-5a60-4830-86da-ffe4e1be718e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 16:38:36.358892    1689 system_pods.go:61] "csi-hostpath-resizer-0" [1c9db05c-29b4-4b09-8ac4-74dd7bdb93aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 16:38:36.358895    1689 system_pods.go:61] "csi-hostpathplugin-d74qm" [ddc6f18e-e69f-43ab-a482-9bb5084206ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 16:38:36.358898    1689 system_pods.go:61] "etcd-addons-938000" [5833a33e-faf4-46c5-9552-9a0b44cec0f6] Running
	I0923 16:38:36.358910    1689 system_pods.go:61] "kube-apiserver-addons-938000" [3fac481e-e5bb-4f22-8cf2-c989cef6b725] Running
	I0923 16:38:36.358914    1689 system_pods.go:61] "kube-controller-manager-addons-938000" [81af9948-d179-4af0-90e7-6f4c91c85379] Running
	I0923 16:38:36.358917    1689 system_pods.go:61] "kube-ingress-dns-minikube" [0088e524-e48f-4ac2-8009-4e28063d1779] Running
	I0923 16:38:36.358919    1689 system_pods.go:61] "kube-proxy-bgw42" [60f64fa1-4c71-4f9c-9b88-862e03e5320e] Running
	I0923 16:38:36.358921    1689 system_pods.go:61] "kube-scheduler-addons-938000" [61accfd9-b3ba-48b6-9a26-814f04f5e447] Running
	I0923 16:38:36.358923    1689 system_pods.go:61] "metrics-server-84c5f94fbc-9x8cv" [1fb0a34a-415b-42fa-8601-24f114da22ab] Running
	I0923 16:38:36.358925    1689 system_pods.go:61] "nvidia-device-plugin-daemonset-42w2d" [6219d98f-b5fd-406b-9358-a0e23f30e6ab] Running
	I0923 16:38:36.358927    1689 system_pods.go:61] "registry-66c9cd494c-h9ld7" [957ab26e-5223-48ff-90ce-62f677de8be0] Running
	I0923 16:38:36.358929    1689 system_pods.go:61] "registry-proxy-4znqx" [55be3d2c-a04d-4e79-ae58-eabab8942dc0] Running
	I0923 16:38:36.358931    1689 system_pods.go:61] "snapshot-controller-56fcc65765-5lvkd" [a2615145-de48-4cd1-af06-f3c49053302e] Running
	I0923 16:38:36.358932    1689 system_pods.go:61] "snapshot-controller-56fcc65765-r8kf8" [5defe3eb-6b30-46fb-8373-8c740b772422] Running
	I0923 16:38:36.358934    1689 system_pods.go:61] "storage-provisioner" [e85f2960-2ffb-4c3b-a84d-0dc2828fec0b] Running
	I0923 16:38:36.358937    1689 system_pods.go:74] duration metric: took 193.542334ms to wait for pod list to return data ...
	I0923 16:38:36.358941    1689 default_sa.go:34] waiting for default service account to be created ...
	I0923 16:38:36.554061    1689 default_sa.go:45] found service account: "default"
	I0923 16:38:36.554070    1689 default_sa.go:55] duration metric: took 195.13025ms for default service account to be created ...
	I0923 16:38:36.554075    1689 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 16:38:36.643809    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:36.645015    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:36.759368    1689 system_pods.go:86] 17 kube-system pods found
	I0923 16:38:36.759379    1689 system_pods.go:89] "coredns-7c65d6cfc9-5dl4c" [8c8ad3a9-4fa8-4d36-924c-c1be1d38057c] Running
	I0923 16:38:36.759384    1689 system_pods.go:89] "csi-hostpath-attacher-0" [1e4b795e-5a60-4830-86da-ffe4e1be718e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 16:38:36.759388    1689 system_pods.go:89] "csi-hostpath-resizer-0" [1c9db05c-29b4-4b09-8ac4-74dd7bdb93aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 16:38:36.759392    1689 system_pods.go:89] "csi-hostpathplugin-d74qm" [ddc6f18e-e69f-43ab-a482-9bb5084206ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 16:38:36.759394    1689 system_pods.go:89] "etcd-addons-938000" [5833a33e-faf4-46c5-9552-9a0b44cec0f6] Running
	I0923 16:38:36.759396    1689 system_pods.go:89] "kube-apiserver-addons-938000" [3fac481e-e5bb-4f22-8cf2-c989cef6b725] Running
	I0923 16:38:36.759398    1689 system_pods.go:89] "kube-controller-manager-addons-938000" [81af9948-d179-4af0-90e7-6f4c91c85379] Running
	I0923 16:38:36.759400    1689 system_pods.go:89] "kube-ingress-dns-minikube" [0088e524-e48f-4ac2-8009-4e28063d1779] Running
	I0923 16:38:36.759402    1689 system_pods.go:89] "kube-proxy-bgw42" [60f64fa1-4c71-4f9c-9b88-862e03e5320e] Running
	I0923 16:38:36.759408    1689 system_pods.go:89] "kube-scheduler-addons-938000" [61accfd9-b3ba-48b6-9a26-814f04f5e447] Running
	I0923 16:38:36.759410    1689 system_pods.go:89] "metrics-server-84c5f94fbc-9x8cv" [1fb0a34a-415b-42fa-8601-24f114da22ab] Running
	I0923 16:38:36.759412    1689 system_pods.go:89] "nvidia-device-plugin-daemonset-42w2d" [6219d98f-b5fd-406b-9358-a0e23f30e6ab] Running
	I0923 16:38:36.759414    1689 system_pods.go:89] "registry-66c9cd494c-h9ld7" [957ab26e-5223-48ff-90ce-62f677de8be0] Running
	I0923 16:38:36.759415    1689 system_pods.go:89] "registry-proxy-4znqx" [55be3d2c-a04d-4e79-ae58-eabab8942dc0] Running
	I0923 16:38:36.759418    1689 system_pods.go:89] "snapshot-controller-56fcc65765-5lvkd" [a2615145-de48-4cd1-af06-f3c49053302e] Running
	I0923 16:38:36.759422    1689 system_pods.go:89] "snapshot-controller-56fcc65765-r8kf8" [5defe3eb-6b30-46fb-8373-8c740b772422] Running
	I0923 16:38:36.759423    1689 system_pods.go:89] "storage-provisioner" [e85f2960-2ffb-4c3b-a84d-0dc2828fec0b] Running
	I0923 16:38:36.759427    1689 system_pods.go:126] duration metric: took 205.353167ms to wait for k8s-apps to be running ...
	I0923 16:38:36.759430    1689 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 16:38:36.759492    1689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 16:38:36.767405    1689 system_svc.go:56] duration metric: took 7.972625ms WaitForService to wait for kubelet
	I0923 16:38:36.767415    1689 kubeadm.go:582] duration metric: took 37.205967959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 16:38:36.767426    1689 node_conditions.go:102] verifying NodePressure condition ...
	I0923 16:38:36.955455    1689 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 16:38:36.955465    1689 node_conditions.go:123] node cpu capacity is 2
	I0923 16:38:36.955471    1689 node_conditions.go:105] duration metric: took 188.046292ms to run NodePressure ...
	I0923 16:38:36.955477    1689 start.go:241] waiting for startup goroutines ...
	I0923 16:38:37.142754    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:37.145370    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:37.643164    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:37.645481    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:38.193459    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:38.193533    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:38.643144    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:38.645481    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:39.143636    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:39.144801    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:39.642060    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:39.645432    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:40.143387    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:40.145284    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:40.671015    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:40.671103    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:41.144342    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:41.145807    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:41.650721    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:41.653982    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:42.141842    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:42.145453    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:42.643432    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:42.644668    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:43.141857    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:43.145189    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:43.645770    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:43.647641    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:44.144148    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:44.145448    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:44.643110    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:44.644620    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:45.143425    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:45.144815    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:45.642977    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:45.644644    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:46.142922    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:46.144909    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:46.643019    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:46.645182    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:47.143193    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:47.144531    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:47.643394    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:47.644896    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:48.142870    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:48.144762    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:48.642930    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:48.644728    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:49.142949    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:49.144537    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:49.643138    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:49.644571    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:50.144974    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:50.146976    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:50.643533    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:50.645051    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:51.143040    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:51.144715    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:51.643156    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:51.644470    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:52.144952    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:52.146804    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:52.648608    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:52.651443    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:53.146392    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:53.147325    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:53.642940    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:53.644434    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:54.143187    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:54.144864    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:54.643107    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:54.644977    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:55.143213    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:55.144584    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:55.643091    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:55.644354    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:56.143228    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:56.144436    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:56.644048    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:56.647357    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:57.143471    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:57.145132    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:57.642931    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:57.644474    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:58.143301    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:58.144724    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:58.643292    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:58.644740    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:59.143414    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:38:59.144943    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:59.646445    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:38:59.646631    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:00.143023    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:00.144260    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:00.643171    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:00.644568    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:01.143138    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:01.144543    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:01.642191    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:01.644431    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:02.143200    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:02.144583    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:02.642694    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:02.644347    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:03.142936    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:03.144291    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:03.643282    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:03.644510    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 16:39:04.143766    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:04.144932    1689 kapi.go:107] duration metric: took 59.002362708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 16:39:04.647936    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:05.144257    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:05.644660    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:06.151151    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:06.650262    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:07.151097    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:07.649889    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:08.143006    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:08.643008    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:09.142918    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:09.642664    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:10.142747    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:10.642580    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:11.142668    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:11.642383    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:12.142708    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:12.642690    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:13.142936    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:13.844679    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:14.145129    1689 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 16:39:14.643042    1689 kapi.go:107] duration metric: took 1m9.504719875s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 16:39:31.174994    1689 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 16:39:31.175011    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:31.679335    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:32.178464    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:32.678040    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:33.177811    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:33.675911    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:34.174164    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:34.675055    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:35.183019    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:35.678173    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:36.179184    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:36.679388    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:37.180762    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:37.677399    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:38.175472    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:38.677792    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:39.180980    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:39.680135    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:40.178448    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:40.678758    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:41.176780    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:41.680096    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:42.180838    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:42.679615    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:43.177272    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:43.679318    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:44.173957    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:44.677977    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:45.181451    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:45.683024    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:46.176738    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:46.673664    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:47.174696    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:47.674526    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:48.173038    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:48.678611    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:49.178277    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:49.675455    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:50.175810    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:50.673727    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:51.174364    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:51.679991    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:52.174884    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:52.678059    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:53.179243    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:53.677563    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:54.173471    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:54.674970    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:55.176780    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:55.680867    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:56.174773    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:56.680743    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:57.178466    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:57.678482    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:58.174916    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:58.677162    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:59.178046    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:39:59.678938    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:00.178796    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:00.679006    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:01.175062    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:01.679780    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:02.180004    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:02.680195    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:03.178017    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:03.675183    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:04.172773    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:04.674836    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:05.175800    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:05.680291    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:06.179820    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:06.674673    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:07.179219    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:07.679684    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:08.180242    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:08.679642    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:09.178108    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:09.676567    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:10.179518    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:10.680387    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:11.177030    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:11.676504    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:12.173008    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:12.673734    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:13.173131    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:13.673634    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:14.173653    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:14.673204    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:15.175133    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:15.672974    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:16.174189    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:16.674366    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:17.174207    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:17.673324    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:18.175461    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:18.674674    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:19.174608    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:19.674239    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:20.179704    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:20.678803    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:21.172552    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:21.674207    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:22.176841    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:22.676715    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:23.179906    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:23.675010    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:24.173135    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:24.672772    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:25.177985    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:25.677886    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:26.179116    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:26.674174    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:27.183953    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:27.674884    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:28.172677    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:28.672267    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:29.174990    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:29.673520    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:30.177175    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:30.680886    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:31.177214    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:31.677298    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:32.178655    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:32.679643    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:33.180128    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:33.673091    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:34.173214    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:34.674127    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:35.173633    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:35.672649    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:36.172502    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:36.672303    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:37.172243    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:37.672391    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:38.172782    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:38.673109    1689 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 16:40:39.173757    1689 kapi.go:107] duration metric: took 2m30.505177334s to wait for kubernetes.io/minikube-addons=gcp-auth ...
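
Editor's note: the repeated kapi.go:96 lines above are minikube polling the cluster roughly every half second until every pod matching a label selector (here kubernetes.io/minikube-addons=gcp-auth) leaves Pending. Below is a minimal sketch of that polling pattern using client-go; it is illustrative only, not minikube's actual kapi.go implementation, and the package name, helper name, and 500ms interval are assumptions.

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabeledPods polls until every pod matching selector in ns reports
// phase Running, logging progress much like the kapi.go:96 lines in this report.
func WaitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient API errors and an empty list both mean "keep waiting".
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

On a healthy cluster the condition eventually returns true once the matching pod is scheduled and started; the 2m30.5s duration logged above is simply how long that took for gcp-auth in this run.
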
	I0923 16:40:39.179224    1689 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-938000 cluster.
	I0923 16:40:39.183086    1689 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 16:40:39.187377    1689 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 16:40:39.196090    1689 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, ingress-dns, storage-provisioner, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 16:40:39.204070    1689 addons.go:510] duration metric: took 2m39.645083625s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner-rancher ingress-dns storage-provisioner default-storageclass volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 16:40:39.204100    1689 start.go:246] waiting for cluster config update ...
	I0923 16:40:39.204118    1689 start.go:255] writing updated cluster config ...
	I0923 16:40:39.205445    1689 ssh_runner.go:195] Run: rm -f paused
	I0923 16:40:39.370154    1689 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0923 16:40:39.374114    1689 out.go:201] 
	W0923 16:40:39.377061    1689 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0923 16:40:39.381050    1689 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0923 16:40:39.388151    1689 out.go:177] * Done! kubectl is now configured to use "addons-938000" cluster and "default" namespace by default
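
Editor's note: the gcp-auth-skip-secret opt-out mentioned in the output above is an ordinary pod label. The sketch below creates such a pod through client-go under stated assumptions: the pod name, image, namespace, and kubeconfig path are hypothetical placeholders, not values from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (default ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Per the minikube message above, the gcp-auth webhook skips pods
	// carrying this label; everything else here is a placeholder.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "demo",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "demo", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod without mounted GCP credentials:", created.Name)
}
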
	
	
	==> Docker <==
	Sep 23 23:50:21 addons-938000 dockerd[1290]: time="2024-09-23T23:50:21.528227552Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:25 addons-938000 dockerd[1284]: time="2024-09-23T23:50:25.642931452Z" level=info msg="ignoring event" container=f0ed18ec48f4809fe3f436f573fb3b19b3ac70396a972880e2c7409858cdb0a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:25 addons-938000 dockerd[1290]: time="2024-09-23T23:50:25.643195014Z" level=info msg="shim disconnected" id=f0ed18ec48f4809fe3f436f573fb3b19b3ac70396a972880e2c7409858cdb0a4 namespace=moby
	Sep 23 23:50:25 addons-938000 dockerd[1290]: time="2024-09-23T23:50:25.643225045Z" level=warning msg="cleaning up after shim disconnected" id=f0ed18ec48f4809fe3f436f573fb3b19b3ac70396a972880e2c7409858cdb0a4 namespace=moby
	Sep 23 23:50:25 addons-938000 dockerd[1290]: time="2024-09-23T23:50:25.643229508Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1284]: time="2024-09-23T23:50:29.091907727Z" level=info msg="ignoring event" container=263ac8c221873e44bca23beabddce5e3bfe0da4a5f3ee265f9b17c0479f575c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.092014622Z" level=info msg="shim disconnected" id=263ac8c221873e44bca23beabddce5e3bfe0da4a5f3ee265f9b17c0479f575c0 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.092045068Z" level=warning msg="cleaning up after shim disconnected" id=263ac8c221873e44bca23beabddce5e3bfe0da4a5f3ee265f9b17c0479f575c0 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.092049238Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1284]: time="2024-09-23T23:50:29.234594588Z" level=info msg="ignoring event" container=04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.234273737Z" level=info msg="shim disconnected" id=04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.234700815Z" level=warning msg="cleaning up after shim disconnected" id=04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.234716956Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1284]: time="2024-09-23T23:50:29.256275408Z" level=info msg="ignoring event" container=9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.256546419Z" level=info msg="shim disconnected" id=9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.256620991Z" level=warning msg="cleaning up after shim disconnected" id=9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.256639801Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1284]: time="2024-09-23T23:50:29.325235186Z" level=info msg="ignoring event" container=44d90769ee7a9be8d2b81fa854c2ab364e457a2e8e42cf3bce3891f5254d51b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.325316389Z" level=info msg="shim disconnected" id=44d90769ee7a9be8d2b81fa854c2ab364e457a2e8e42cf3bce3891f5254d51b8 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.325345459Z" level=warning msg="cleaning up after shim disconnected" id=44d90769ee7a9be8d2b81fa854c2ab364e457a2e8e42cf3bce3891f5254d51b8 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.325350130Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1284]: time="2024-09-23T23:50:29.369623107Z" level=info msg="ignoring event" container=368b093c78de504822aaa80be8df3c15e7d1c00a0aed2eef12bda3a8b2b73d33 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.369694759Z" level=info msg="shim disconnected" id=368b093c78de504822aaa80be8df3c15e7d1c00a0aed2eef12bda3a8b2b73d33 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.369826303Z" level=warning msg="cleaning up after shim disconnected" id=368b093c78de504822aaa80be8df3c15e7d1c00a0aed2eef12bda3a8b2b73d33 namespace=moby
	Sep 23 23:50:29 addons-938000 dockerd[1290]: time="2024-09-23T23:50:29.369831224Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	6490554859fbf       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   f5de6f21e2ef1       gcp-auth-89d5ffd79-lkfrj
	107622081f2bf       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   07d8465307ee5       ingress-nginx-controller-bc57996ff-wfhpk
	a9b420d07146e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   e2795f9e6305a       ingress-nginx-admission-patch-2wjx9
	70fb60ab2a338       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   0250feb860cd8       ingress-nginx-admission-create-k44mt
	2477ca2b45b04       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago      Running             yakd                       0                   c0937341687b1       yakd-dashboard-67d98fc6b-7tdsj
	88c450d9cc58d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   2e8bc85b3267f       local-path-provisioner-86d989889c-7l2sl
	2df22be5df0e8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   470e53122c8b0       kube-ingress-dns-minikube
	04c723100b00a       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   44d90769ee7a9       registry-66c9cd494c-h9ld7
	9843be58c504b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   368b093c78de5       registry-proxy-4znqx
	5579f20a939d8       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator     0                   668c22fc6e189       cloud-spanner-emulator-5b584cc74-hzk8n
	169f7a2a725f7       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   17d2244d66e98       nvidia-device-plugin-daemonset-42w2d
	f53c2dff2e42b       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   93ff58140f020       storage-provisioner
	e2b56d422785a       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   43ee210be643e       kube-proxy-bgw42
	c527c3852540c       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   bc2b3d28e1828       coredns-7c65d6cfc9-5dl4c
	ed6a1dff1b660       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   9c73ede82d1c4       kube-scheduler-addons-938000
	a9c0034faacf2       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   a17fcc36421dc       kube-apiserver-addons-938000
	be8092649fae4       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   a25056e054ae2       kube-controller-manager-addons-938000
	d31854da4ec29       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   837642befe6d1       etcd-addons-938000
	
	
	==> controller_ingress [107622081f2b] <==
	W0923 23:39:13.968630       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0923 23:39:13.968737       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0923 23:39:13.973236       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0923 23:39:14.096883       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0923 23:39:14.108944       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0923 23:39:14.126903       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0923 23:39:14.132561       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"d0066517-ac3f-49d4-82e9-c7eb1ad73703", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0923 23:39:14.134161       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"4056e592-c7db-49ad-96b1-6327e7e41776", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0923 23:39:14.134228       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"6bc0855e-598f-48fd-b1bd-a0e1cd22a155", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 23:39:15.329356       7 nginx.go:317] "Starting NGINX process"
	I0923 23:39:15.329600       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 23:39:15.329868       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 23:39:15.330000       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 23:39:15.340020       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 23:39:15.340234       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-wfhpk"
	I0923 23:39:15.346412       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-wfhpk" node="addons-938000"
	I0923 23:39:15.356457       7 controller.go:213] "Backend successfully reloaded"
	I0923 23:39:15.356530       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 23:39:15.356556       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-wfhpk", UID:"7c57438c-7673-4fed-a64a-abca1dedf3f4", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [c527c3852540] <==
	Trace[65857794]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:38:30.259)
	Trace[65857794]: [30.000216644s] [30.000216644s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51151 - 47430 "HINFO IN 171288418615011562.2230294244682896114. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011826563s
	[INFO] 10.244.0.6:57218 - 32919 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130153s
	[INFO] 10.244.0.6:57218 - 49810 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141936s
	[INFO] 10.244.0.6:51247 - 26256 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000064993s
	[INFO] 10.244.0.6:51247 - 32918 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042135s
	[INFO] 10.244.0.6:57658 - 64327 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052544s
	[INFO] 10.244.0.6:57658 - 44869 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045507s
	[INFO] 10.244.0.6:52885 - 64481 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000055042s
	[INFO] 10.244.0.6:52885 - 43744 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045965s
	[INFO] 10.244.0.6:54530 - 61853 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026438s
	[INFO] 10.244.0.6:54530 - 7837 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000011741s
	[INFO] 10.244.0.25:60132 - 34795 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125926s
	[INFO] 10.244.0.25:46714 - 10151 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00003574s
	[INFO] 10.244.0.25:39195 - 22594 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000037115s
	[INFO] 10.244.0.25:60279 - 59037 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000027367s
	[INFO] 10.244.0.25:42183 - 45546 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000026368s
	[INFO] 10.244.0.25:35485 - 55672 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023369s
	[INFO] 10.244.0.25:53849 - 36552 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002140236s
	[INFO] 10.244.0.25:36775 - 27862 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001417465s
	
	
	==> describe nodes <==
	Name:               addons-938000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-938000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-938000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T16_37_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-938000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:37:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-938000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:50:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:49:27 +0000   Mon, 23 Sep 2024 23:37:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:49:27 +0000   Mon, 23 Sep 2024 23:37:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:49:27 +0000   Mon, 23 Sep 2024 23:37:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:49:27 +0000   Mon, 23 Sep 2024 23:37:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-938000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3eab31137bc414197bb43e1877ae625
	  System UUID:                e3eab31137bc414197bb43e1877ae625
	  Boot ID:                    1b4a0ee7-3101-4ef6-8c78-7cc5026f3343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-5b584cc74-hzk8n      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-89d5ffd79-lkfrj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-wfhpk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-5dl4c                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-938000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-938000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-938000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-bgw42                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-938000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-42w2d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-7l2sl     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-7tdsj              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-938000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-938000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-938000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-938000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-938000 event: Registered Node addons-938000 in Controller
	
	
	==> dmesg <==
	[  +6.237603] kauditd_printk_skb: 8 callbacks suppressed
	[  +7.624426] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.939703] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.936908] kauditd_printk_skb: 19 callbacks suppressed
	[Sep23 23:39] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.420685] kauditd_printk_skb: 27 callbacks suppressed
	[ +13.989762] kauditd_printk_skb: 18 callbacks suppressed
	[Sep23 23:40] kauditd_printk_skb: 2 callbacks suppressed
	[ +15.971943] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.592266] kauditd_printk_skb: 2 callbacks suppressed
	[ +21.916835] kauditd_printk_skb: 9 callbacks suppressed
	[Sep23 23:41] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.359289] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.951664] kauditd_printk_skb: 2 callbacks suppressed
	[Sep23 23:44] kauditd_printk_skb: 2 callbacks suppressed
	[Sep23 23:49] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.409755] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.819906] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.934106] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.057671] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.543708] kauditd_printk_skb: 7 callbacks suppressed
	[Sep23 23:50] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.153517] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.411676] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.281844] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [d31854da4ec2] <==
	{"level":"info","ts":"2024-09-23T23:37:51.104187Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:37:51.104420Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:37:51.104479Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:37:51.105009Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:37:51.105530Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-23T23:37:51.105926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:37:51.107964Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:37:51.107990Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:37:51.107965Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T23:37:51.108036Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T23:37:51.108240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:37:51.108682Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T23:38:06.985877Z","caller":"traceutil/trace.go:171","msg":"trace[1057718671] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"126.140207ms","start":"2024-09-23T23:38:06.859728Z","end":"2024-09-23T23:38:06.985868Z","steps":["trace[1057718671] 'process raft request'  (duration: 126.079603ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:38:16.706865Z","caller":"traceutil/trace.go:171","msg":"trace[1210222717] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"258.887839ms","start":"2024-09-23T23:38:16.447968Z","end":"2024-09-23T23:38:16.706856Z","steps":["trace[1210222717] 'process raft request'  (duration: 258.702393ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:38:45.872211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.844994ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:38:45.872333Z","caller":"traceutil/trace.go:171","msg":"trace[899208419] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1096; }","duration":"167.973481ms","start":"2024-09-23T23:38:45.704352Z","end":"2024-09-23T23:38:45.872326Z","steps":["trace[899208419] 'range keys from in-memory index tree'  (duration: 167.815569ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:39:13.870387Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.193488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:39:13.870520Z","caller":"traceutil/trace.go:171","msg":"trace[1997008184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1204; }","duration":"169.34195ms","start":"2024-09-23T23:39:13.701170Z","end":"2024-09-23T23:39:13.870512Z","steps":["trace[1997008184] 'range keys from in-memory index tree'  (duration: 169.169703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:39:13.870469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.754639ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:39:13.870965Z","caller":"traceutil/trace.go:171","msg":"trace[239830212] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1204; }","duration":"201.250799ms","start":"2024-09-23T23:39:13.669710Z","end":"2024-09-23T23:39:13.870961Z","steps":["trace[239830212] 'range keys from in-memory index tree'  (duration: 200.728438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:41:01.512069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.897543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-23T23:41:01.512105Z","caller":"traceutil/trace.go:171","msg":"trace[539410070] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1521; }","duration":"161.938492ms","start":"2024-09-23T23:41:01.350158Z","end":"2024-09-23T23:41:01.512096Z","steps":["trace[539410070] 'range keys from in-memory index tree'  (duration: 161.821729ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:47:51.187888Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1841}
	{"level":"info","ts":"2024-09-23T23:47:51.285043Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1841,"took":"93.881338ms","hash":2955286771,"current-db-size-bytes":8978432,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4734976,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-23T23:47:51.285524Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2955286771,"revision":1841,"compact-revision":-1}
	
	
	==> gcp-auth [6490554859fb] <==
	2024/09/23 23:40:37 GCP Auth Webhook started!
	2024/09/23 23:40:54 Ready to marshal response ...
	2024/09/23 23:40:54 Ready to write response ...
	2024/09/23 23:40:55 Ready to marshal response ...
	2024/09/23 23:40:55 Ready to write response ...
	2024/09/23 23:41:17 Ready to marshal response ...
	2024/09/23 23:41:17 Ready to write response ...
	2024/09/23 23:41:17 Ready to marshal response ...
	2024/09/23 23:41:17 Ready to write response ...
	2024/09/23 23:41:17 Ready to marshal response ...
	2024/09/23 23:41:17 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:19 Ready to marshal response ...
	2024/09/23 23:49:19 Ready to write response ...
	2024/09/23 23:49:28 Ready to marshal response ...
	2024/09/23 23:49:28 Ready to write response ...
	2024/09/23 23:49:45 Ready to marshal response ...
	2024/09/23 23:49:45 Ready to write response ...
	2024/09/23 23:50:00 Ready to marshal response ...
	2024/09/23 23:50:00 Ready to write response ...
	
	
	==> kernel <==
	 23:50:29 up 12 min,  0 users,  load average: 1.04, 0.72, 0.50
	Linux addons-938000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a9c0034faacf] <==
	I0923 23:41:08.464609       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 23:41:08.962869       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 23:41:09.107221       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 23:41:09.208166       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 23:41:09.368285       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 23:41:09.430622       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 23:41:09.466008       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 23:41:09.582139       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 23:49:19.033507       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.14.165"}
	I0923 23:49:53.719142       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 23:50:14.824651       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:14.824673       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:14.835181       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:14.835200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:14.856818       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:14.856843       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:14.863488       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:14.863511       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:50:14.870179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:50:14.870198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 23:50:15.864693       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 23:50:15.871259       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 23:50:15.943160       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I0923 23:50:25.608373       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 23:50:26.716419       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [be8092649fae] <==
	W0923 23:50:16.992977       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:16.993054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:18.612415       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:18.612544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:19.577291       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:19.577387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:20.072314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:20.072447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:50:20.346486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="2.294µs"
	W0923 23:50:21.950020       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:21.950155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:24.262311       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:24.262429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:25.585061       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:25.585094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 23:50:26.717562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:50:27.715650       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:27.715707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:50:29.048603       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0923 23:50:29.048621       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 23:50:29.200778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="2.711µs"
	I0923 23:50:29.427864       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 23:50:29.427901       1 shared_informer.go:320] Caches are synced for garbage collector
	W0923 23:50:29.521822       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:50:29.521847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e2b56d422785] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:38:01.712506       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:38:01.719884       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0923 23:38:01.719947       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:38:01.807731       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:38:01.807754       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:38:01.807769       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:38:01.808615       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:38:01.808746       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:38:01.808753       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:38:01.809461       1 config.go:199] "Starting service config controller"
	I0923 23:38:01.809488       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:38:01.809531       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:38:01.809536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:38:01.809829       1 config.go:328] "Starting node config controller"
	I0923 23:38:01.809833       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:38:01.910344       1 shared_informer.go:320] Caches are synced for node config
	I0923 23:38:01.910394       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:38:01.910414       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ed6a1dff1b66] <==
	W0923 23:37:51.844868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 23:37:51.844878       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:51.844899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 23:37:51.844908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:51.844926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:37:51.844930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:51.844967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 23:37:51.844971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:51.845081       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:37:51.845096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:37:52.645052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:37:52.645294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:52.681167       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 23:37:52.681199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:52.685378       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:37:52.685477       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:37:52.692457       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:37:52.692490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:52.726810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:37:52.726835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:52.833680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:37:52.833724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:37:52.882482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 23:37:52.882538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0923 23:37:54.724952       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:50:25 addons-938000 kubelet[2049]: I0923 23:50:25.831694    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-p8msm\" (UniqueName: \"kubernetes.io/projected/54f1b879-64c5-4578-94e6-e625ef207af0-kube-api-access-p8msm\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:25 addons-938000 kubelet[2049]: I0923 23:50:25.831700    2049 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/54f1b879-64c5-4578-94e6-e625ef207af0-debugfs\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:26 addons-938000 kubelet[2049]: I0923 23:50:26.668954    2049 scope.go:117] "RemoveContainer" containerID="de04e4aecc68aa13d75a8aaeb970cd5bc188c257ccca7d375f5ab2c902d4441a"
	Sep 23 23:50:27 addons-938000 kubelet[2049]: I0923 23:50:27.925970    2049 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f1b879-64c5-4578-94e6-e625ef207af0" path="/var/lib/kubelet/pods/54f1b879-64c5-4578-94e6-e625ef207af0/volumes"
	Sep 23 23:50:28 addons-938000 kubelet[2049]: E0923 23:50:28.909643    2049 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9548d715-4d81-432a-a939-eb555f4aef17"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.173989    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/daa61498-62c3-4371-8aa7-3487a294dc95-gcp-creds\") pod \"daa61498-62c3-4371-8aa7-3487a294dc95\" (UID: \"daa61498-62c3-4371-8aa7-3487a294dc95\") "
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.174033    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7222\" (UniqueName: \"kubernetes.io/projected/daa61498-62c3-4371-8aa7-3487a294dc95-kube-api-access-j7222\") pod \"daa61498-62c3-4371-8aa7-3487a294dc95\" (UID: \"daa61498-62c3-4371-8aa7-3487a294dc95\") "
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.174246    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/daa61498-62c3-4371-8aa7-3487a294dc95-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "daa61498-62c3-4371-8aa7-3487a294dc95" (UID: "daa61498-62c3-4371-8aa7-3487a294dc95"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.177695    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daa61498-62c3-4371-8aa7-3487a294dc95-kube-api-access-j7222" (OuterVolumeSpecName: "kube-api-access-j7222") pod "daa61498-62c3-4371-8aa7-3487a294dc95" (UID: "daa61498-62c3-4371-8aa7-3487a294dc95"). InnerVolumeSpecName "kube-api-access-j7222". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.274368    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j7222\" (UniqueName: \"kubernetes.io/projected/daa61498-62c3-4371-8aa7-3487a294dc95-kube-api-access-j7222\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.274383    2049 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/daa61498-62c3-4371-8aa7-3487a294dc95-gcp-creds\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.375415    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4982\" (UniqueName: \"kubernetes.io/projected/957ab26e-5223-48ff-90ce-62f677de8be0-kube-api-access-c4982\") pod \"957ab26e-5223-48ff-90ce-62f677de8be0\" (UID: \"957ab26e-5223-48ff-90ce-62f677de8be0\") "
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.376571    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957ab26e-5223-48ff-90ce-62f677de8be0-kube-api-access-c4982" (OuterVolumeSpecName: "kube-api-access-c4982") pod "957ab26e-5223-48ff-90ce-62f677de8be0" (UID: "957ab26e-5223-48ff-90ce-62f677de8be0"). InnerVolumeSpecName "kube-api-access-c4982". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.475654    2049 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbz9q\" (UniqueName: \"kubernetes.io/projected/55be3d2c-a04d-4e79-ae58-eabab8942dc0-kube-api-access-zbz9q\") pod \"55be3d2c-a04d-4e79-ae58-eabab8942dc0\" (UID: \"55be3d2c-a04d-4e79-ae58-eabab8942dc0\") "
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.475693    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c4982\" (UniqueName: \"kubernetes.io/projected/957ab26e-5223-48ff-90ce-62f677de8be0-kube-api-access-c4982\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.476429    2049 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55be3d2c-a04d-4e79-ae58-eabab8942dc0-kube-api-access-zbz9q" (OuterVolumeSpecName: "kube-api-access-zbz9q") pod "55be3d2c-a04d-4e79-ae58-eabab8942dc0" (UID: "55be3d2c-a04d-4e79-ae58-eabab8942dc0"). InnerVolumeSpecName "kube-api-access-zbz9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.576071    2049 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zbz9q\" (UniqueName: \"kubernetes.io/projected/55be3d2c-a04d-4e79-ae58-eabab8942dc0-kube-api-access-zbz9q\") on node \"addons-938000\" DevicePath \"\""
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.722364    2049 scope.go:117] "RemoveContainer" containerID="9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.749180    2049 scope.go:117] "RemoveContainer" containerID="9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: E0923 23:50:29.749617    2049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434" containerID="9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.749636    2049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434"} err="failed to get container status \"9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434\": rpc error: code = Unknown desc = Error response from daemon: No such container: 9843be58c504b55e943b1dbb92a14eaa42abfad48d3b5dd2c496939463e2d434"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.749645    2049 scope.go:117] "RemoveContainer" containerID="04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.767636    2049 scope.go:117] "RemoveContainer" containerID="04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: E0923 23:50:29.768308    2049 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e" containerID="04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e"
	Sep 23 23:50:29 addons-938000 kubelet[2049]: I0923 23:50:29.768326    2049 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e"} err="failed to get container status \"04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 04c723100b00a7ec349d2f64c4f9f6f1f0670c09c0dddebdbdf5995580771a7e"
	
	
	==> storage-provisioner [f53c2dff2e42] <==
	I0923 23:38:02.389223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:38:02.397304       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:38:02.397332       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:38:02.406911       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:38:02.406995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-938000_976df8b8-a567-4cd3-b07a-f8ad30194b5c!
	I0923 23:38:02.409596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6de9cc4a-b1c1-4f67-9e38-353ccbbf7462", APIVersion:"v1", ResourceVersion:"519", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-938000_976df8b8-a567-4cd3-b07a-f8ad30194b5c became leader
	I0923 23:38:02.513665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-938000_976df8b8-a567-4cd3-b07a-f8ad30194b5c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-938000 -n addons-938000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-938000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-k44mt ingress-nginx-admission-patch-2wjx9
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-938000 describe pod busybox ingress-nginx-admission-create-k44mt ingress-nginx-admission-patch-2wjx9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-938000 describe pod busybox ingress-nginx-admission-create-k44mt ingress-nginx-admission-patch-2wjx9: exit status 1 (41.047917ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-938000/192.168.105.2
	Start Time:       Mon, 23 Sep 2024 16:41:17 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhglq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lhglq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-938000
	  Normal   Pulling    7m55s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-k44mt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wjx9" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-938000 describe pod busybox ingress-nginx-admission-create-k44mt ingress-nginx-admission-patch-2wjx9: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.30s)
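
The failure recorded above boils down to the registry rejecting the pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc with "unauthorized: authentication failed"; the two ingress-nginx admission pods had already been cleaned up, which is why `kubectl describe` exited 1 even though it printed the busybox pod. A minimal way to reproduce the pull failure outside the harness (a sketch; it assumes the addons-938000 node is still running and uses the Docker runtime):

	# Pull the image directly on the node to surface the raw registry error
	out/minikube-darwin-arm64 -p addons-938000 ssh "docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	# Or watch only the pod's pull events instead of the full describe output
	kubectl --context addons-938000 get events -n default --field-selector involvedObject.name=busybox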

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-849000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-849000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.881872709s)

-- stdout --
	* [cert-options-849000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-849000" primary control-plane node in "cert-options-849000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-849000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-849000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-849000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-849000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-849000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.97125ms)

-- stdout --
	* The control-plane node cert-options-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-849000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-849000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
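
The four SAN misses above follow directly from the ssh step failing, not from a malformed certificate. For reference, the check the test performs is roughly the following (a sketch; it needs a running profile, and the -ext flag needs OpenSSL 1.1.1+ in the guest):

	# Print only the SAN extension; on a healthy run it would list
	# 127.0.0.1, 192.168.15.15, localhost and www.google.com
	out/minikube-darwin-arm64 -p cert-options-849000 ssh "sudo openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"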
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-849000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
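
The null clusters/contexts above are just the empty kubeconfig left behind by the failed start. The port assertion can be reproduced with a jsonpath query; on a healthy run the server URL would end in the requested :8555 (a sketch; the name filter is illustrative):

	kubectl --context cert-options-849000 config view -o jsonpath='{.clusters[?(@.name=="cert-options-849000")].cluster.server}'
	# expected: https://<node-ip>:8555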
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-849000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-849000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.884458ms)

-- stdout --
	* The control-plane node cert-options-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-849000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-849000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-849000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-849000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-23 17:16:15.972787 -0700 PDT m=+2370.005313335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-849000 -n cert-options-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-849000 -n cert-options-849000: exit status 7 (30.6095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-849000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-849000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-849000
--- FAIL: TestCertOptions (10.15s)
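
Every start attempt in this test (and in the other qemu2 failures below) dies at the same point: nothing is serving /var/run/socket_vmnet, so QEMU never gets its network file descriptor and the VM is torn down. A minimal health check on the macOS host (a sketch; it assumes socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs):

	# Is the socket present, and does any process have it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# Restart the daemon; the services wrapper must run as root for vmnet access
	HOMEBREW=$(brew --prefix) && sudo ${HOMEBREW}/bin/brew services restart socket_vmnet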

TestCertExpiration (195.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.062131291s)

-- stdout --
	* [cert-expiration-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-029000" primary control-plane node in "cert-expiration-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230709333s)

-- stdout --
	* [cert-expiration-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-029000" primary control-plane node in "cert-expiration-029000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-029000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-029000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-029000" primary control-plane node in "cert-expiration-029000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-029000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-029000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-23 17:19:16.163074 -0700 PDT m=+2550.129707418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-029000 -n cert-expiration-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-029000 -n cert-expiration-029000: exit status 7 (58.857875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-029000
--- FAIL: TestCertExpiration (195.44s)
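
Both phases of this test (start with --cert-expiration=3m, wait out the three minutes, restart with 8760h) failed on the same socket_vmnet error, so the expiry logic was never actually exercised; the 195s runtime is almost entirely the built-in wait. On a working profile the certificate window under test can be inspected directly (a sketch):

	out/minikube-darwin-arm64 -p cert-expiration-029000 ssh "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"
	# notBefore=... / notAfter=... (three minutes apart after the first start)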

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-241000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-241000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.87549675s)

-- stdout --
	* [docker-flags-241000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-241000" primary control-plane node in "docker-flags-241000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-241000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:15:55.856654    4275 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:15:55.856797    4275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:55.856800    4275 out.go:358] Setting ErrFile to fd 2...
	I0923 17:15:55.856803    4275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:55.856922    4275 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:15:55.857972    4275 out.go:352] Setting JSON to false
	I0923 17:15:55.873827    4275 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2718,"bootTime":1727134237,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:15:55.873905    4275 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:15:55.881578    4275 out.go:177] * [docker-flags-241000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:15:55.888447    4275 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:15:55.888498    4275 notify.go:220] Checking for updates...
	I0923 17:15:55.900378    4275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:15:55.903412    4275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:15:55.906415    4275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:15:55.909374    4275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:15:55.912375    4275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:15:55.915621    4275 config.go:182] Loaded profile config "force-systemd-flag-263000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:55.915686    4275 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:55.915731    4275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:15:55.920331    4275 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:15:55.927361    4275 start.go:297] selected driver: qemu2
	I0923 17:15:55.927367    4275 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:15:55.927378    4275 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:15:55.929604    4275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:15:55.932357    4275 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:15:55.935492    4275 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0923 17:15:55.935520    4275 cni.go:84] Creating CNI manager for ""
	I0923 17:15:55.935543    4275 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:15:55.935553    4275 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:15:55.935594    4275 start.go:340] cluster config:
	{Name:docker-flags-241000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:15:55.939416    4275 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:55.946327    4275 out.go:177] * Starting "docker-flags-241000" primary control-plane node in "docker-flags-241000" cluster
	I0923 17:15:55.950229    4275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:15:55.950245    4275 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:15:55.950251    4275 cache.go:56] Caching tarball of preloaded images
	I0923 17:15:55.950319    4275 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:15:55.950332    4275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:15:55.950395    4275 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/docker-flags-241000/config.json ...
	I0923 17:15:55.950412    4275 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/docker-flags-241000/config.json: {Name:mk8a6de9dacfc75350b2fa063762c1b8e52320d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:15:55.950619    4275 start.go:360] acquireMachinesLock for docker-flags-241000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:55.950654    4275 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "docker-flags-241000"
	I0923 17:15:55.950667    4275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:55.950696    4275 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:55.959399    4275 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:55.977260    4275 start.go:159] libmachine.API.Create for "docker-flags-241000" (driver="qemu2")
	I0923 17:15:55.977288    4275 client.go:168] LocalClient.Create starting
	I0923 17:15:55.977346    4275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:55.977373    4275 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:55.977382    4275 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:55.977421    4275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:55.977443    4275 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:55.977450    4275 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:55.977785    4275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:56.139969    4275 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:56.199203    4275 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:56.199208    4275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:56.199431    4275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:15:56.208472    4275 main.go:141] libmachine: STDOUT: 
	I0923 17:15:56.208495    4275 main.go:141] libmachine: STDERR: 
	I0923 17:15:56.208556    4275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2 +20000M
	I0923 17:15:56.216385    4275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:56.216399    4275 main.go:141] libmachine: STDERR: 
	I0923 17:15:56.216414    4275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:15:56.216421    4275 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:56.216434    4275 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:56.216472    4275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:75:d1:32:5c:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:15:56.218085    4275 main.go:141] libmachine: STDOUT: 
	I0923 17:15:56.218103    4275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:56.218122    4275 client.go:171] duration metric: took 240.836459ms to LocalClient.Create
	I0923 17:15:58.220232    4275 start.go:128] duration metric: took 2.269589875s to createHost
	I0923 17:15:58.220292    4275 start.go:83] releasing machines lock for "docker-flags-241000", held for 2.269702625s
	W0923 17:15:58.220364    4275 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:58.232714    4275 out.go:177] * Deleting "docker-flags-241000" in qemu2 ...
	W0923 17:15:58.274868    4275 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:58.274888    4275 start.go:729] Will try again in 5 seconds ...
	I0923 17:16:03.277010    4275 start.go:360] acquireMachinesLock for docker-flags-241000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:16:03.282552    4275 start.go:364] duration metric: took 5.420584ms to acquireMachinesLock for "docker-flags-241000"
	I0923 17:16:03.282712    4275 start.go:93] Provisioning new machine with config: &{Name:docker-flags-241000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:16:03.283007    4275 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:16:03.294722    4275 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:16:03.346059    4275 start.go:159] libmachine.API.Create for "docker-flags-241000" (driver="qemu2")
	I0923 17:16:03.346107    4275 client.go:168] LocalClient.Create starting
	I0923 17:16:03.346223    4275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:16:03.346299    4275 main.go:141] libmachine: Decoding PEM data...
	I0923 17:16:03.346316    4275 main.go:141] libmachine: Parsing certificate...
	I0923 17:16:03.346379    4275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:16:03.346430    4275 main.go:141] libmachine: Decoding PEM data...
	I0923 17:16:03.346450    4275 main.go:141] libmachine: Parsing certificate...
	I0923 17:16:03.346913    4275 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:16:03.543562    4275 main.go:141] libmachine: Creating SSH key...
	I0923 17:16:03.623579    4275 main.go:141] libmachine: Creating Disk image...
	I0923 17:16:03.623585    4275 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:16:03.623797    4275 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:16:03.633132    4275 main.go:141] libmachine: STDOUT: 
	I0923 17:16:03.633145    4275 main.go:141] libmachine: STDERR: 
	I0923 17:16:03.633205    4275 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2 +20000M
	I0923 17:16:03.640994    4275 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:16:03.641018    4275 main.go:141] libmachine: STDERR: 
	I0923 17:16:03.641029    4275 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:16:03.641034    4275 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:16:03.641041    4275 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:16:03.641068    4275 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:86:10:8f:ec:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/docker-flags-241000/disk.qcow2
	I0923 17:16:03.642690    4275 main.go:141] libmachine: STDOUT: 
	I0923 17:16:03.642703    4275 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:16:03.642717    4275 client.go:171] duration metric: took 296.612917ms to LocalClient.Create
	I0923 17:16:05.644871    4275 start.go:128] duration metric: took 2.361901708s to createHost
	I0923 17:16:05.644955    4275 start.go:83] releasing machines lock for "docker-flags-241000", held for 2.362454916s
	W0923 17:16:05.645444    4275 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-241000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-241000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:16:05.659019    4275 out.go:201] 
	W0923 17:16:05.676445    4275 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:16:05.676472    4275 out.go:270] * 
	* 
	W0923 17:16:05.678328    4275 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:16:05.690118    4275 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-241000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.09125ms)

-- stdout --
	* The control-plane node docker-flags-241000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-241000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-241000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-241000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-241000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-241000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-241000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.464083ms)

-- stdout --
	* The control-plane node docker-flags-241000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-241000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-241000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-241000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug* . output: "* The control-plane node docker-flags-241000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-241000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-23 17:16:05.82804 -0700 PDT m=+2359.860235668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-241000 -n docker-flags-241000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-241000 -n docker-flags-241000: exit status 7 (29.423084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-241000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-241000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-241000
--- FAIL: TestDockerFlags (10.11s)
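
For context, the two assertions this test makes map onto systemd properties of the docker unit; on a healthy cluster they would look roughly like this (a sketch; the exact dockerd argv varies by minikube version):

	out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# Environment=FOO=BAR BAZ=BAT ...                      <- from --docker-env
	out/minikube-darwin-arm64 -p docker-flags-241000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# ExecStart={ path=/usr/bin/dockerd ; argv[]=... --debug --icc=true ... }  <- from --docker-opt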

TestForceSystemdFlag (10.17s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-263000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-263000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.97529525s)

-- stdout --
	* [force-systemd-flag-263000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-263000" primary control-plane node in "force-systemd-flag-263000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-263000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:15:50.656597    4254 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:15:50.656750    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:50.656754    4254 out.go:358] Setting ErrFile to fd 2...
	I0923 17:15:50.656756    4254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:50.656882    4254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:15:50.657942    4254 out.go:352] Setting JSON to false
	I0923 17:15:50.673859    4254 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2713,"bootTime":1727134237,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:15:50.673930    4254 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:15:50.681871    4254 out.go:177] * [force-systemd-flag-263000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:15:50.701967    4254 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:15:50.701994    4254 notify.go:220] Checking for updates...
	I0923 17:15:50.715847    4254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:15:50.718768    4254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:15:50.721863    4254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:15:50.724883    4254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:15:50.726328    4254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:15:50.730180    4254 config.go:182] Loaded profile config "force-systemd-env-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:50.730262    4254 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:50.730317    4254 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:15:50.734905    4254 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:15:50.740794    4254 start.go:297] selected driver: qemu2
	I0923 17:15:50.740800    4254 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:15:50.740806    4254 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:15:50.743205    4254 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:15:50.746880    4254 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:15:50.749946    4254 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 17:15:50.749963    4254 cni.go:84] Creating CNI manager for ""
	I0923 17:15:50.750015    4254 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:15:50.750021    4254 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:15:50.750057    4254 start.go:340] cluster config:
	{Name:force-systemd-flag-263000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:15:50.753987    4254 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:50.761847    4254 out.go:177] * Starting "force-systemd-flag-263000" primary control-plane node in "force-systemd-flag-263000" cluster
	I0923 17:15:50.765772    4254 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:15:50.765789    4254 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:15:50.765796    4254 cache.go:56] Caching tarball of preloaded images
	I0923 17:15:50.765877    4254 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:15:50.765883    4254 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:15:50.765963    4254 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/force-systemd-flag-263000/config.json ...
	I0923 17:15:50.765976    4254 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/force-systemd-flag-263000/config.json: {Name:mk8699b642f2de71520cf4e036efd3e38de71dca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:15:50.766237    4254 start.go:360] acquireMachinesLock for force-systemd-flag-263000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:50.766277    4254 start.go:364] duration metric: took 32.375µs to acquireMachinesLock for "force-systemd-flag-263000"
	I0923 17:15:50.766293    4254 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:50.766325    4254 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:50.774764    4254 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:50.795288    4254 start.go:159] libmachine.API.Create for "force-systemd-flag-263000" (driver="qemu2")
	I0923 17:15:50.795325    4254 client.go:168] LocalClient.Create starting
	I0923 17:15:50.795395    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:50.795430    4254 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:50.795441    4254 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:50.795491    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:50.795517    4254 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:50.795529    4254 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:50.795916    4254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:50.958147    4254 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:51.150973    4254 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:51.150980    4254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:51.151220    4254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:51.160945    4254 main.go:141] libmachine: STDOUT: 
	I0923 17:15:51.160963    4254 main.go:141] libmachine: STDERR: 
	I0923 17:15:51.161017    4254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2 +20000M
	I0923 17:15:51.169084    4254 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:51.169104    4254 main.go:141] libmachine: STDERR: 
	I0923 17:15:51.169119    4254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:51.169138    4254 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:51.169150    4254 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:51.169181    4254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:a0:21:97:33:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:51.170871    4254 main.go:141] libmachine: STDOUT: 
	I0923 17:15:51.170885    4254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:51.170907    4254 client.go:171] duration metric: took 375.586166ms to LocalClient.Create
	I0923 17:15:53.173050    4254 start.go:128] duration metric: took 2.406769s to createHost
	I0923 17:15:53.173145    4254 start.go:83] releasing machines lock for "force-systemd-flag-263000", held for 2.40693625s
	W0923 17:15:53.173238    4254 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:53.193382    4254 out.go:177] * Deleting "force-systemd-flag-263000" in qemu2 ...
	W0923 17:15:53.220192    4254 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:53.220220    4254 start.go:729] Will try again in 5 seconds ...
	I0923 17:15:58.222275    4254 start.go:360] acquireMachinesLock for force-systemd-flag-263000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:58.222587    4254 start.go:364] duration metric: took 230.083µs to acquireMachinesLock for "force-systemd-flag-263000"
	I0923 17:15:58.222644    4254 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-263000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-263000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:58.222903    4254 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:58.241658    4254 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:58.290924    4254 start.go:159] libmachine.API.Create for "force-systemd-flag-263000" (driver="qemu2")
	I0923 17:15:58.290973    4254 client.go:168] LocalClient.Create starting
	I0923 17:15:58.291103    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:58.291174    4254 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:58.291192    4254 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:58.291244    4254 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:58.291290    4254 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:58.291302    4254 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:58.292014    4254 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:58.485347    4254 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:58.518134    4254 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:58.518139    4254 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:58.518347    4254 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:58.527444    4254 main.go:141] libmachine: STDOUT: 
	I0923 17:15:58.527469    4254 main.go:141] libmachine: STDERR: 
	I0923 17:15:58.527533    4254 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2 +20000M
	I0923 17:15:58.535302    4254 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:58.535325    4254 main.go:141] libmachine: STDERR: 
	I0923 17:15:58.535338    4254 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:58.535341    4254 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:58.535350    4254 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:58.535381    4254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:2d:9f:98:4a:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-flag-263000/disk.qcow2
	I0923 17:15:58.536953    4254 main.go:141] libmachine: STDOUT: 
	I0923 17:15:58.536973    4254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:58.536987    4254 client.go:171] duration metric: took 246.016208ms to LocalClient.Create
	I0923 17:16:00.538011    4254 start.go:128] duration metric: took 2.31511875s to createHost
	I0923 17:16:00.538095    4254 start.go:83] releasing machines lock for "force-systemd-flag-263000", held for 2.315562375s
	W0923 17:16:00.538446    4254 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-263000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:16:00.554907    4254 out.go:201] 
	W0923 17:16:00.572513    4254 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:16:00.572577    4254 out.go:270] * 
	* 
	W0923 17:16:00.575275    4254 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:16:00.591031    4254 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-263000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-263000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-263000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.365208ms)

-- stdout --
	* The control-plane node force-systemd-flag-263000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-263000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-263000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-23 17:16:00.686955 -0700 PDT m=+2354.718982960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-263000 -n force-systemd-flag-263000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-263000 -n force-systemd-flag-263000: exit status 7 (34.492958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-263000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-263000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-263000
--- FAIL: TestForceSystemdFlag (10.17s)
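Both createHost attempts in this test (and in TestForceSystemdEnv below) fail at the same point: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet before qemu-system-aarch64 ever boots, so no VM is created and every later step sees state=Stopped. A minimal pre-flight check for the test host, assuming a Homebrew-managed socket_vmnet install at the paths the log shows (the service commands below are an assumption, not something the harness runs):

	# Is the vmnet daemon's unix socket present? (path taken from the log above)
	ls -l /var/run/socket_vmnet               # should list a socket ("s" in the mode bits)
	# If absent or refusing connections, restart the daemon; socket_vmnet's README
	# documents running it as a root service (assumption: installed via Homebrew)
	sudo brew services restart socket_vmnet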

TestForceSystemdEnv (10.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0923 17:15:45.484046    1596 install.go:79] stdout: 
W0923 17:15:45.484150    1596 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit 

I0923 17:15:45.484166    1596 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit]
I0923 17:15:45.494096    1596 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit]
I0923 17:15:45.502871    1596 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit]
I0923 17:15:45.511824    1596 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit]
I0923 17:15:45.528216    1596 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 17:15:45.528318    1596 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0923 17:15:47.308957    1596 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0923 17:15:47.308977    1596 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0923 17:15:47.309017    1596 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 17:15:47.309050    1596 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit
I0923 17:15:47.709853    1596 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40] Decompressors:map[bz2:0x1400012b820 gz:0x1400012b828 tar:0x1400012b760 tar.bz2:0x1400012b7b0 tar.gz:0x1400012b7c0 tar.xz:0x1400012b7d0 tar.zst:0x1400012b7e0 tbz2:0x1400012b7b0 tgz:0x1400012b7c0 txz:0x1400012b7d0 tzst:0x1400012b7e0 xz:0x1400012b840 zip:0x1400012b870 zst:0x1400012b848] Getters:map[file:0x140008fd4c0 http:0x14000026960 https:0x140000269b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 17:15:47.709995    1596 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit
I0923 17:15:50.583666    1596 install.go:79] stdout: 
W0923 17:15:50.583836    1596 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit 

I0923 17:15:50.583871    1596 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit]
I0923 17:15:50.598366    1596 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit]
I0923 17:15:50.609935    1596 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit]
I0923 17:15:50.618693    1596 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/002/docker-machine-driver-hyperkit]
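The interleaved install.go/download.go lines above (pid 1596, from the parallel hyperkit driver test) show the updater's fallback path: the arm64-suffixed v1.3.0 asset has no checksum file (HTTP 404), so it retries the unsuffixed common name. A hand-run equivalent of that two-step fetch, using the same URLs the log prints (illustrative only):

	curl -fLo docker-machine-driver-hyperkit \
	  https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64 ||
	curl -fLo docker-machine-driver-hyperkit \
	  https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit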
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.579719125s)

-- stdout --
	* [force-systemd-env-831000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-831000" primary control-plane node in "force-systemd-env-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:15:45.081498    4222 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:15:45.081624    4222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:45.081628    4222 out.go:358] Setting ErrFile to fd 2...
	I0923 17:15:45.081630    4222 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:45.081763    4222 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:15:45.082779    4222 out.go:352] Setting JSON to false
	I0923 17:15:45.098978    4222 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2708,"bootTime":1727134237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:15:45.099045    4222 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:15:45.106271    4222 out.go:177] * [force-systemd-env-831000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:15:45.116091    4222 notify.go:220] Checking for updates...
	I0923 17:15:45.122023    4222 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:15:45.130850    4222 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:15:45.138968    4222 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:15:45.146979    4222 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:15:45.154049    4222 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:15:45.161984    4222 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0923 17:15:45.166269    4222 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:45.166316    4222 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:15:45.169019    4222 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:15:45.176926    4222 start.go:297] selected driver: qemu2
	I0923 17:15:45.176931    4222 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:15:45.176936    4222 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:15:45.179251    4222 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:15:45.183035    4222 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:15:45.186096    4222 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 17:15:45.186112    4222 cni.go:84] Creating CNI manager for ""
	I0923 17:15:45.186138    4222 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:15:45.186147    4222 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:15:45.186171    4222 start.go:340] cluster config:
	{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:15:45.189878    4222 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:45.197867    4222 out.go:177] * Starting "force-systemd-env-831000" primary control-plane node in "force-systemd-env-831000" cluster
	I0923 17:15:45.200983    4222 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:15:45.201008    4222 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:15:45.201016    4222 cache.go:56] Caching tarball of preloaded images
	I0923 17:15:45.201075    4222 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:15:45.201081    4222 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:15:45.201137    4222 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/force-systemd-env-831000/config.json ...
	I0923 17:15:45.201148    4222 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/force-systemd-env-831000/config.json: {Name:mka3f8ad66699fc0565b8201f9311aa1e50c3c08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:15:45.201353    4222 start.go:360] acquireMachinesLock for force-systemd-env-831000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:45.201389    4222 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "force-systemd-env-831000"
	I0923 17:15:45.201402    4222 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:45.201434    4222 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:45.205016    4222 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:45.222007    4222 start.go:159] libmachine.API.Create for "force-systemd-env-831000" (driver="qemu2")
	I0923 17:15:45.222033    4222 client.go:168] LocalClient.Create starting
	I0923 17:15:45.222104    4222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:45.222138    4222 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:45.222147    4222 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:45.222190    4222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:45.222213    4222 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:45.222223    4222 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:45.222584    4222 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:45.383022    4222 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:45.439707    4222 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:45.439713    4222 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:45.439935    4222 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:45.449295    4222 main.go:141] libmachine: STDOUT: 
	I0923 17:15:45.449310    4222 main.go:141] libmachine: STDERR: 
	I0923 17:15:45.449389    4222 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2 +20000M
	I0923 17:15:45.457880    4222 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:45.457896    4222 main.go:141] libmachine: STDERR: 
	I0923 17:15:45.457912    4222 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:45.457915    4222 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:45.457927    4222 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:45.457953    4222 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:99:09:19:30:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:45.459837    4222 main.go:141] libmachine: STDOUT: 
	I0923 17:15:45.459854    4222 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:45.459874    4222 client.go:171] duration metric: took 237.843041ms to LocalClient.Create
	I0923 17:15:47.461938    4222 start.go:128] duration metric: took 2.260562458s to createHost
	I0923 17:15:47.461965    4222 start.go:83] releasing machines lock for "force-systemd-env-831000", held for 2.26064325s
	W0923 17:15:47.462001    4222 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:47.481138    4222 out.go:177] * Deleting "force-systemd-env-831000" in qemu2 ...
	W0923 17:15:47.505970    4222 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:47.505983    4222 start.go:729] Will try again in 5 seconds ...
	I0923 17:15:52.508065    4222 start.go:360] acquireMachinesLock for force-systemd-env-831000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:53.173322    4222 start.go:364] duration metric: took 665.151959ms to acquireMachinesLock for "force-systemd-env-831000"
	I0923 17:15:53.173480    4222 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:53.173713    4222 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:53.179460    4222 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 17:15:53.228965    4222 start.go:159] libmachine.API.Create for "force-systemd-env-831000" (driver="qemu2")
	I0923 17:15:53.229007    4222 client.go:168] LocalClient.Create starting
	I0923 17:15:53.229152    4222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:53.229228    4222 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:53.229252    4222 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:53.229321    4222 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:53.229367    4222 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:53.229400    4222 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:53.230954    4222 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:53.449861    4222 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:53.551134    4222 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:53.551143    4222 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:53.551371    4222 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:53.560691    4222 main.go:141] libmachine: STDOUT: 
	I0923 17:15:53.560706    4222 main.go:141] libmachine: STDERR: 
	I0923 17:15:53.560763    4222 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2 +20000M
	I0923 17:15:53.568613    4222 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:53.568635    4222 main.go:141] libmachine: STDERR: 
	I0923 17:15:53.568648    4222 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:53.568653    4222 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:53.568664    4222 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:53.568689    4222 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:14:00:d5:fd:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0923 17:15:53.570349    4222 main.go:141] libmachine: STDOUT: 
	I0923 17:15:53.570363    4222 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:53.570376    4222 client.go:171] duration metric: took 341.374375ms to LocalClient.Create
	I0923 17:15:55.571390    4222 start.go:128] duration metric: took 2.397683625s to createHost
	I0923 17:15:55.571477    4222 start.go:83] releasing machines lock for "force-systemd-env-831000", held for 2.398172917s
	W0923 17:15:55.571897    4222 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:55.592645    4222 out.go:201] 
	W0923 17:15:55.603405    4222 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:15:55.603440    4222 out.go:270] * 
	* 
	W0923 17:15:55.605399    4222 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:15:55.616405    4222 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.838542ms)

-- stdout --
	* The control-plane node force-systemd-env-831000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-831000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-23 17:15:55.712547 -0700 PDT m=+2349.744413251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-831000 -n force-systemd-env-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-831000 -n force-systemd-env-831000: exit status 7 (34.503125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-831000
--- FAIL: TestForceSystemdEnv (10.78s)
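For reference, the assertion neither force-systemd test reached: with --force-systemd or MINIKUBE_FORCE_SYSTEMD=true in effect, dockerd inside the guest should report systemd as its cgroup driver. Had the VM come up, the check at docker_test.go:110 reduces to this command (the expected value is inferred from the test's intent):

	out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh "docker info --format {{.CgroupDriver}}"
	# expected: systemd    (cgroupfs would mean the flag/env variable was not applied)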

TestFunctional/parallel/ServiceCmdConnect (32.02s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-496000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-496000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-4j7rj" [b9be7132-2c5f-4d6b-bbd2-83dc9a677add] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-4j7rj" [b9be7132-2c5f-4d6b-bbd2-83dc9a677add] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.175656125s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:30156
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:55:54.336387    1596 retry.go:31] will retry after 524.073552ms: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:55:54.863059    1596 retry.go:31] will retry after 1.518358952s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:55:56.383447    1596 retry.go:31] will retry after 1.375014341s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:55:57.760707    1596 retry.go:31] will retry after 2.020167032s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:55:59.784573    1596 retry.go:31] will retry after 6.236294188s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
E0923 16:55:59.836883    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:56:06.022984    1596 retry.go:31] will retry after 5.857764997s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
I0923 16:56:11.883230    1596 retry.go:31] will retry after 7.017416002s: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:30156: Get "http://192.168.105.4:30156": dial tcp 192.168.105.4:30156: connect: connection refused
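The growing intervals above come from the harness's backoff helper (retry.go:31). A minimal Go sketch of that retry pattern follows, with illustrative starting durations and jitter rather than the harness's exact policy.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// fetchWithRetry keeps trying an HTTP GET until it succeeds or the deadline
// passes, doubling a jittered wait between attempts, as the increasing
// "will retry after" intervals in the log suggest.
func fetchWithRetry(url string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	backoff := 500 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("failed to fetch %s: %w", url, err)
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
}

func main() {
	if err := fetchWithRetry("http://192.168.105.4:30156", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}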
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-496000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-4j7rj
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-496000/192.168.105.4
Start Time:       Mon, 23 Sep 2024 16:55:48 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://5136baea89081d1df1eb5734fa1649464af14298713e633463363d2748505dd1
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 23 Sep 2024 16:56:01 -0700
      Finished:     Mon, 23 Sep 2024 16:56:01 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hblw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-2hblw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-4j7rj to functional-496000
  Normal   Pulled     17s (x3 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    17s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    17s (x3 over 30s)  kubelet            Started container echoserver-arm
  Warning  BackOff    1s (x3 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-4j7rj_default(b9be7132-2c5f-4d6b-bbd2-83dc9a677add)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-496000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
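An "exec format error" on an arm64 node typically means the container's entrypoint binary was built for a different CPU architecture, and the entrypoint here is /usr/sbin/nginx rather than an echoserver, which points to a mis-built image. One way to cross-check (a sketch shelling out to the docker CLI, not part of the suite; the image name is taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the architecture recorded in the image's config.
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", "registry.k8s.io/echoserver-arm:1.8").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// An "amd64" result on an arm64 node would explain the exec format error.
	fmt.Println("image architecture:", strings.TrimSpace(string(out)))
}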
functional_test.go:1614: (dbg) Run:  kubectl --context functional-496000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.63.178
IPs:                      10.99.63.178
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30156/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
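The blank "Endpoints:" field above is the direct cause of the connection-refused errors: with its only pod crash-looping and never Ready, the Service has no backends, so the NodePort has nothing to forward to. A quick hypothetical cross-check via kubectl (context and service names taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List the service's endpoints; an empty ENDPOINTS column matches the
	// blank Endpoints: line in the describe output above.
	out, err := exec.Command("kubectl", "--context", "functional-496000",
		"get", "endpoints", "hello-node-connect", "-o", "wide").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Print(string(out))
}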
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-496000 -n functional-496000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh -- ls                                                                                          | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh cat                                                                                            | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | /mount-9p/test-1727135770710016000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh stat                                                                                           | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh stat                                                                                           | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh sudo                                                                                           | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1054151009/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh -- ls                                                                                          | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh sudo                                                                                           | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-496000 ssh findmnt                                                                                        | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT | 23 Sep 24 16:56 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-496000 --dry-run                                                                                       | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-496000                                                                                                 | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-496000 | jenkins | v1.34.0 | 23 Sep 24 16:56 PDT |                     |
	|           | -p functional-496000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 16:56:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 16:56:17.603104    2722 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:56:17.603210    2722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.603213    2722 out.go:358] Setting ErrFile to fd 2...
	I0923 16:56:17.603215    2722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.603338    2722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:56:17.604738    2722 out.go:352] Setting JSON to false
	I0923 16:56:17.621628    2722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1540,"bootTime":1727134237,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:56:17.621728    2722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:56:17.626408    2722 out.go:177] * [functional-496000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:56:17.633397    2722 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 16:56:17.633503    2722 notify.go:220] Checking for updates...
	I0923 16:56:17.640397    2722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:56:17.643367    2722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:56:17.646440    2722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:56:17.650401    2722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 16:56:17.653355    2722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 16:56:17.656677    2722 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:56:17.656949    2722 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:56:17.661340    2722 out.go:177] * Using the qemu2 driver based on the existing profile
	I0923 16:56:17.669460    2722 start.go:297] selected driver: qemu2
	I0923 16:56:17.669466    2722 start.go:901] validating driver "qemu2" against &{Name:functional-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:56:17.669520    2722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 16:56:17.676393    2722 out.go:201] 
	W0923 16:56:17.680415    2722 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 16:56:17.684407    2722 out.go:201] 
	
	
	==> Docker <==
	Sep 23 23:56:11 functional-496000 dockerd[6086]: time="2024-09-23T23:56:11.745831150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:11 functional-496000 cri-dockerd[6420]: time="2024-09-23T23:56:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/078bf010c054f697411af5edee4bb5e32eb71206614cd659b28c3d17c728854a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 23:56:13 functional-496000 cri-dockerd[6420]: time="2024-09-23T23:56:13Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.317109870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.317163996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.317174955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.317210122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.350874118Z" level=info msg="shim disconnected" id=82d60f8f8595e13177e52e9eeb218564c2f165c7de593046b5ac49d4b6561cea namespace=moby
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.350905202Z" level=warning msg="cleaning up after shim disconnected" id=82d60f8f8595e13177e52e9eeb218564c2f165c7de593046b5ac49d4b6561cea namespace=moby
	Sep 23 23:56:13 functional-496000 dockerd[6086]: time="2024-09-23T23:56:13.350909869Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:56:13 functional-496000 dockerd[6080]: time="2024-09-23T23:56:13.351021914Z" level=info msg="ignoring event" container=82d60f8f8595e13177e52e9eeb218564c2f165c7de593046b5ac49d4b6561cea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:56:15 functional-496000 dockerd[6080]: time="2024-09-23T23:56:15.280540529Z" level=info msg="ignoring event" container=078bf010c054f697411af5edee4bb5e32eb71206614cd659b28c3d17c728854a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:56:15 functional-496000 dockerd[6086]: time="2024-09-23T23:56:15.280703658Z" level=info msg="shim disconnected" id=078bf010c054f697411af5edee4bb5e32eb71206614cd659b28c3d17c728854a namespace=moby
	Sep 23 23:56:15 functional-496000 dockerd[6086]: time="2024-09-23T23:56:15.280747784Z" level=warning msg="cleaning up after shim disconnected" id=078bf010c054f697411af5edee4bb5e32eb71206614cd659b28c3d17c728854a namespace=moby
	Sep 23 23:56:15 functional-496000 dockerd[6086]: time="2024-09-23T23:56:15.280753493Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.180382694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.180472655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.180488572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.180734912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:19 functional-496000 cri-dockerd[6420]: time="2024-09-23T23:56:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/82812251cb6a19281914039efb0cc7e8ddfc94ea18e706b64fb12645829a5ba0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.253600520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.259806095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.259817845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:19 functional-496000 dockerd[6086]: time="2024-09-23T23:56:19.259869721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 23:56:19 functional-496000 cri-dockerd[6420]: time="2024-09-23T23:56:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5e3b384adfe0111de2ee85edb12c95df31096969472eb47600f41c6a74edc87/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	82d60f8f8595e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 seconds ago        Exited              mount-munger              0                   078bf010c054f       busybox-mount
	bd42a1218a266       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         15 seconds ago       Running             myfrontend                0                   30f3b06728479       sp-pod
	5136baea89081       72565bf5bbedf                                                                                         18 seconds ago       Exited              echoserver-arm            2                   eb56ca19305e1       hello-node-connect-65d86f57f4-4j7rj
	2c69e8f27e500       72565bf5bbedf                                                                                         25 seconds ago       Exited              echoserver-arm            2                   b0af51e7e9c07       hello-node-64b4f8f9ff-f284x
	073bf075a5526       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         38 seconds ago       Running             nginx                     0                   799ab13ca5812       nginx-svc
	efe774f10c13b       ba04bb24b9575                                                                                         57 seconds ago       Running             storage-provisioner       3                   7604ba9732e27       storage-provisioner
	dd486a64bb908       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   d5cc08fb4aefc       coredns-7c65d6cfc9-7j92m
	01dc039278cdf       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   21ee4ac934211       coredns-7c65d6cfc9-btl62
	3ceec30aaa312       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   7604ba9732e27       storage-provisioner
	37f00685235b1       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   27bb2df46e71d       kube-proxy-lwmgb
	66252e1565705       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   7f79e77e10f79       kube-controller-manager-functional-496000
	9db91c3c9f521       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   97ead8db4c4be       etcd-functional-496000
	59dc5cbc386ab       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   c3d5f016ed5d1       kube-scheduler-functional-496000
	3020d06ede0a3       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   d2769b52d1f81       kube-apiserver-functional-496000
	2bd54cb43ced6       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   49bac75a29c69       coredns-7c65d6cfc9-7j92m
	b32330cf20d12       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   888664978fc57       coredns-7c65d6cfc9-btl62
	d92c7594dc079       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   56aa2e4be3b70       kube-proxy-lwmgb
	9448640e20d64       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   82683da29fb2d       etcd-functional-496000
	12f53e936d79f       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   ef53fa21b793f       kube-scheduler-functional-496000
	b60a6fb61dc0a       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   24a631b3ef89d       kube-controller-manager-functional-496000
	
	
	==> coredns [01dc039278cd] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38284 - 44862 "HINFO IN 6784958066166223728.2601981250779057415. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023415635s
	[INFO] 10.244.0.1:5361 - 56673 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000107211s
	[INFO] 10.244.0.1:61634 - 23039 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000109503s
	[INFO] 10.244.0.1:30766 - 39610 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001545794s
	[INFO] 10.244.0.1:4164 - 44283 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000068252s
	
	
	==> coredns [2bd54cb43ced] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56593 - 2951 "HINFO IN 9077071435033695607.4329648847391944444. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025098882s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b32330cf20d1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38068 - 59826 "HINFO IN 8546692628759456314.2849419240113105426. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023306669s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [dd486a64bb90] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37493 - 51886 "HINFO IN 7146446370384068982.4086249541142485706. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.427101753s
	[INFO] 10.244.0.1:11186 - 59538 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000176338s
	[INFO] 10.244.0.1:48675 - 24070 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000081794s
	
	
	==> describe nodes <==
	Name:               functional-496000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-496000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=functional-496000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T16_53_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:53:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-496000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:56:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:56:06 +0000   Mon, 23 Sep 2024 23:53:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:56:06 +0000   Mon, 23 Sep 2024 23:53:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:56:06 +0000   Mon, 23 Sep 2024 23:53:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:56:06 +0000   Mon, 23 Sep 2024 23:53:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-496000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea6a155d6e6f4e559be8561db1771f57
	  System UUID:                ea6a155d6e6f4e559be8561db1771f57
	  Boot ID:                    63f2156d-d1cb-4be4-b4d4-8b8d822fca4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-f284x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  default                     hello-node-connect-65d86f57f4-4j7rj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 coredns-7c65d6cfc9-7j92m                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m28s
	  kube-system                 coredns-7c65d6cfc9-btl62                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m28s
	  kube-system                 etcd-functional-496000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m33s
	  kube-system                 kube-apiserver-functional-496000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-functional-496000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-lwmgb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-functional-496000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-pdmnm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-tgp5h        0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (6%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m27s                kube-proxy       
	  Normal  Starting                 72s                  kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m33s                kubelet          Node functional-496000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m33s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m33s                kubelet          Node functional-496000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s                kubelet          Node functional-496000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m33s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m30s                kubelet          Node functional-496000 status is now: NodeReady
	  Normal  RegisteredNode           2m29s                node-controller  Node functional-496000 event: Registered Node functional-496000 in Controller
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node functional-496000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node functional-496000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node functional-496000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                 node-controller  Node functional-496000 event: Registered Node functional-496000 in Controller
	  Normal  Starting                 77s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)    kubelet          Node functional-496000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)    kubelet          Node functional-496000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)    kubelet          Node functional-496000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                  node-controller  Node functional-496000 event: Registered Node functional-496000 in Controller
	
	
	==> dmesg <==
	[ +15.133844] systemd-fstab-generator[5120]: Ignoring "noauto" option for root device
	[  +0.063810] kauditd_printk_skb: 54 callbacks suppressed
	[ +12.335915] systemd-fstab-generator[5547]: Ignoring "noauto" option for root device
	[  +0.055744] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.101126] systemd-fstab-generator[5581]: Ignoring "noauto" option for root device
	[  +0.100095] systemd-fstab-generator[5593]: Ignoring "noauto" option for root device
	[  +0.096528] systemd-fstab-generator[5607]: Ignoring "noauto" option for root device
	[  +5.135512] kauditd_printk_skb: 91 callbacks suppressed
	[Sep23 23:55] systemd-fstab-generator[6302]: Ignoring "noauto" option for root device
	[  +0.094033] systemd-fstab-generator[6314]: Ignoring "noauto" option for root device
	[  +0.091579] systemd-fstab-generator[6326]: Ignoring "noauto" option for root device
	[  +0.088048] systemd-fstab-generator[6385]: Ignoring "noauto" option for root device
	[  +0.216824] systemd-fstab-generator[6578]: Ignoring "noauto" option for root device
	[  +1.120264] systemd-fstab-generator[6699]: Ignoring "noauto" option for root device
	[  +1.073196] kauditd_printk_skb: 189 callbacks suppressed
	[  +5.495238] kauditd_printk_skb: 63 callbacks suppressed
	[ +12.932928] systemd-fstab-generator[8054]: Ignoring "noauto" option for root device
	[  +5.197551] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.316877] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.463315] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.755658] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.478412] kauditd_printk_skb: 32 callbacks suppressed
	[Sep23 23:56] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.662991] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.540578] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [9448640e20d6] <==
	{"level":"info","ts":"2024-09-23T23:54:19.121461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T23:54:19.121681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-23T23:54:19.121731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T23:54:19.121759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-23T23:54:19.121786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T23:54:19.121810Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-23T23:54:19.126433Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:54:19.126434Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-496000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T23:54:19.127246Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:54:19.127679Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T23:54:19.127873Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T23:54:19.128499Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:54:19.129381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:54:19.130755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-23T23:54:19.131253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T23:54:48.928579Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T23:54:48.928610Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-496000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-23T23:54:48.928653Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T23:54:48.928698Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T23:54:48.942009Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T23:54:48.942034Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T23:54:48.942129Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-23T23:54:48.943549Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-23T23:54:48.943588Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-23T23:54:48.943612Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-496000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [9db91c3c9f52] <==
	{"level":"info","ts":"2024-09-23T23:55:04.189331Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T23:55:04.189440Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T23:55:04.189456Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T23:55:04.189507Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-23T23:55:04.189515Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-23T23:55:05.230747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-23T23:55:05.230876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-23T23:55:05.230937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-23T23:55:05.230962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-23T23:55:05.230976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-23T23:55:05.230992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-23T23:55:05.231008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-23T23:55:05.234754Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:55:05.234757Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-496000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T23:55:05.235281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:55:05.235566Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T23:55:05.235645Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T23:55:05.236231Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:55:05.236598Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:55:05.237727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-23T23:55:05.237932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-23T23:55:54.310600Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.237791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:55:54.310685Z","caller":"traceutil/trace.go:171","msg":"trace[1601525259] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:771; }","duration":"170.337752ms","start":"2024-09-23T23:55:54.140338Z","end":"2024-09-23T23:55:54.310675Z","steps":["trace[1601525259] 'range keys from in-memory index tree'  (duration: 170.208457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:55:54.310865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.8083ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:55:54.310879Z","caller":"traceutil/trace.go:171","msg":"trace[1499826653] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:771; }","duration":"225.826384ms","start":"2024-09-23T23:55:54.085049Z","end":"2024-09-23T23:55:54.310875Z","steps":["trace[1499826653] 'range keys from in-memory index tree'  (duration: 225.803883ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:56:19 up 2 min,  0 users,  load average: 0.93, 0.55, 0.22
	Linux functional-496000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3020d06ede0a] <==
	I0923 23:55:05.832367       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 23:55:05.832382       1 aggregator.go:171] initial CRD sync complete...
	I0923 23:55:05.832428       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 23:55:05.832435       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 23:55:05.832438       1 cache.go:39] Caches are synced for autoregister controller
	I0923 23:55:05.833332       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 23:55:05.833353       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 23:55:05.856639       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 23:55:06.732586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 23:55:06.833616       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0923 23:55:06.834243       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 23:55:06.835699       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 23:55:07.208548       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 23:55:07.212285       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 23:55:07.224608       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 23:55:07.231565       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 23:55:07.233462       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 23:55:27.597558       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.212.34"}
	I0923 23:55:32.870427       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 23:55:32.913526       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.112.105"}
	I0923 23:55:37.722304       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.128.43"}
	I0923 23:55:48.135014       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.63.178"}
	I0923 23:56:18.189322       1 controller.go:615] quota admission added evaluator for: namespaces
	I0923 23:56:18.293864       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.224.7"}
	I0923 23:56:18.301666       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.193.180"}
	
	
	==> kube-controller-manager [66252e156570] <==
	I0923 23:55:48.710759       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.459µs"
	I0923 23:55:49.720976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="39.209µs"
	I0923 23:55:54.802944       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.292µs"
	I0923 23:56:01.046591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="47.293µs"
	I0923 23:56:01.933433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.001µs"
	I0923 23:56:06.938353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-496000"
	I0923 23:56:10.036919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="68.46µs"
	I0923 23:56:17.023271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.043µs"
	I0923 23:56:18.221700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.84636ms"
	E0923 23:56:18.221721       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0923 23:56:18.226080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.26832ms"
	E0923 23:56:18.226100       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0923 23:56:18.226129       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.903491ms"
	E0923 23:56:18.226133       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0923 23:56:18.235298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.155016ms"
	E0923 23:56:18.235407       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0923 23:56:18.235298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.98072ms"
	E0923 23:56:18.235546       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0923 23:56:18.253574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="16.709844ms"
	I0923 23:56:18.277484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.138354ms"
	I0923 23:56:18.277662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="24.06624ms"
	I0923 23:56:18.289279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.770551ms"
	I0923 23:56:18.289474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.083µs"
	I0923 23:56:18.289447       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.776093ms"
	I0923 23:56:18.289680       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="193.463µs"
	
	
	==> kube-controller-manager [b60a6fb61dc0] <==
	I0923 23:54:23.008553       1 shared_informer.go:320] Caches are synced for crt configmap
	I0923 23:54:23.009672       1 shared_informer.go:320] Caches are synced for service account
	I0923 23:54:23.009721       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0923 23:54:23.009725       1 shared_informer.go:320] Caches are synced for taint
	I0923 23:54:23.009827       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0923 23:54:23.009864       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-496000"
	I0923 23:54:23.009916       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 23:54:23.009729       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0923 23:54:23.010409       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0923 23:54:23.010438       1 shared_informer.go:320] Caches are synced for expand
	I0923 23:54:23.009736       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 23:54:23.010441       1 shared_informer.go:320] Caches are synced for ephemeral
	I0923 23:54:23.009732       1 shared_informer.go:320] Caches are synced for TTL
	I0923 23:54:23.012841       1 shared_informer.go:320] Caches are synced for namespace
	I0923 23:54:23.083508       1 shared_informer.go:320] Caches are synced for stateful set
	I0923 23:54:23.182197       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 23:54:23.211454       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 23:54:23.260423       1 shared_informer.go:320] Caches are synced for disruption
	I0923 23:54:23.260484       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0923 23:54:23.364981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="381.335706ms"
	I0923 23:54:23.368062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="2.983445ms"
	I0923 23:54:23.368157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.876µs"
	I0923 23:54:23.623395       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 23:54:23.687294       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 23:54:23.687342       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [37f00685235b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:55:06.620504       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:55:06.628642       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0923 23:55:06.628676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:55:06.637230       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:55:06.637245       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:55:06.637257       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:55:06.637945       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:55:06.638032       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:55:06.638042       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:55:06.638601       1 config.go:199] "Starting service config controller"
	I0923 23:55:06.639066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:55:06.639081       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:55:06.639085       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:55:06.639371       1 config.go:328] "Starting node config controller"
	I0923 23:55:06.639374       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:55:06.739435       1 shared_informer.go:320] Caches are synced for node config
	I0923 23:55:06.739436       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:55:06.739447       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d92c7594dc07] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 23:54:21.210067       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 23:54:21.215928       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0923 23:54:21.215959       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:54:21.224090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 23:54:21.224117       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 23:54:21.224132       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:54:21.224705       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:54:21.224803       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:54:21.224811       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:54:21.225371       1 config.go:199] "Starting service config controller"
	I0923 23:54:21.225383       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:54:21.225391       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:54:21.225394       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:54:21.225527       1 config.go:328] "Starting node config controller"
	I0923 23:54:21.225534       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:54:21.326252       1 shared_informer.go:320] Caches are synced for node config
	I0923 23:54:21.326252       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:54:21.326266       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [12f53e936d79] <==
	I0923 23:54:17.747159       1 serving.go:386] Generated self-signed cert in-memory
	W0923 23:54:19.641337       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 23:54:19.641380       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 23:54:19.641401       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 23:54:19.641408       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 23:54:19.695585       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 23:54:19.695680       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:54:19.696632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 23:54:19.696717       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 23:54:19.696728       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 23:54:19.696793       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 23:54:19.797014       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 23:54:48.934316       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 23:54:48.934355       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 23:54:48.934429       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [59dc5cbc386a] <==
	I0923 23:55:04.440552       1 serving.go:386] Generated self-signed cert in-memory
	W0923 23:55:05.756397       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 23:55:05.756437       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 23:55:05.756450       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 23:55:05.756457       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 23:55:05.793071       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 23:55:05.793084       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:55:05.797477       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 23:55:05.797564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 23:55:05.797574       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 23:55:05.797631       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 23:55:05.899200       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:56:03 functional-496000 kubelet[6706]: I0923 23:56:03.246147    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904\" (UniqueName: \"kubernetes.io/host-path/21e3b52e-9e66-4eed-ac80-1513b6d5e15e-pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904\") pod \"sp-pod\" (UID: \"21e3b52e-9e66-4eed-ac80-1513b6d5e15e\") " pod="default/sp-pod"
	Sep 23 23:56:03 functional-496000 kubelet[6706]: I0923 23:56:03.246183    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dcv\" (UniqueName: \"kubernetes.io/projected/21e3b52e-9e66-4eed-ac80-1513b6d5e15e-kube-api-access-l9dcv\") pod \"sp-pod\" (UID: \"21e3b52e-9e66-4eed-ac80-1513b6d5e15e\") " pod="default/sp-pod"
	Sep 23 23:56:05 functional-496000 kubelet[6706]: I0923 23:56:05.041315    6706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.302370767 podStartE2EDuration="2.04129173s" podCreationTimestamp="2024-09-23 23:56:03 +0000 UTC" firstStartedPulling="2024-09-23 23:56:03.467788466 +0000 UTC m=+60.517799011" lastFinishedPulling="2024-09-23 23:56:04.206709428 +0000 UTC m=+61.256719974" observedRunningTime="2024-09-23 23:56:05.041180436 +0000 UTC m=+62.091190981" watchObservedRunningTime="2024-09-23 23:56:05.04129173 +0000 UTC m=+62.091302318"
	Sep 23 23:56:10 functional-496000 kubelet[6706]: I0923 23:56:10.018168    6706 scope.go:117] "RemoveContainer" containerID="2c69e8f27e5006801474dc5c427465184c3cd45cda37cc86c59e2cd99e2a46a8"
	Sep 23 23:56:10 functional-496000 kubelet[6706]: E0923 23:56:10.018694    6706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-f284x_default(9190e9b5-ea27-4acd-a3b9-be8ec40be992)\"" pod="default/hello-node-64b4f8f9ff-f284x" podUID="9190e9b5-ea27-4acd-a3b9-be8ec40be992"
	Sep 23 23:56:11 functional-496000 kubelet[6706]: I0923 23:56:11.543353    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-test-volume\") pod \"busybox-mount\" (UID: \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\") " pod="default/busybox-mount"
	Sep 23 23:56:11 functional-496000 kubelet[6706]: I0923 23:56:11.543395    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhzc\" (UniqueName: \"kubernetes.io/projected/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-kube-api-access-prhzc\") pod \"busybox-mount\" (UID: \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\") " pod="default/busybox-mount"
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.481383    6706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-test-volume\") pod \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\" (UID: \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\") "
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.481413    6706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhzc\" (UniqueName: \"kubernetes.io/projected/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-kube-api-access-prhzc\") pod \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\" (UID: \"983ee093-2fc4-4ba7-8e1a-2da1bb03b953\") "
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.481572    6706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-test-volume" (OuterVolumeSpecName: "test-volume") pod "983ee093-2fc4-4ba7-8e1a-2da1bb03b953" (UID: "983ee093-2fc4-4ba7-8e1a-2da1bb03b953"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.483906    6706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-kube-api-access-prhzc" (OuterVolumeSpecName: "kube-api-access-prhzc") pod "983ee093-2fc4-4ba7-8e1a-2da1bb03b953" (UID: "983ee093-2fc4-4ba7-8e1a-2da1bb03b953"). InnerVolumeSpecName "kube-api-access-prhzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.582033    6706 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-test-volume\") on node \"functional-496000\" DevicePath \"\""
	Sep 23 23:56:15 functional-496000 kubelet[6706]: I0923 23:56:15.582056    6706 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-prhzc\" (UniqueName: \"kubernetes.io/projected/983ee093-2fc4-4ba7-8e1a-2da1bb03b953-kube-api-access-prhzc\") on node \"functional-496000\" DevicePath \"\""
	Sep 23 23:56:16 functional-496000 kubelet[6706]: I0923 23:56:16.218328    6706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078bf010c054f697411af5edee4bb5e32eb71206614cd659b28c3d17c728854a"
	Sep 23 23:56:17 functional-496000 kubelet[6706]: I0923 23:56:17.017378    6706 scope.go:117] "RemoveContainer" containerID="5136baea89081d1df1eb5734fa1649464af14298713e633463363d2748505dd1"
	Sep 23 23:56:17 functional-496000 kubelet[6706]: E0923 23:56:17.017456    6706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-4j7rj_default(b9be7132-2c5f-4d6b-bbd2-83dc9a677add)\"" pod="default/hello-node-connect-65d86f57f4-4j7rj" podUID="b9be7132-2c5f-4d6b-bbd2-83dc9a677add"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: E0923 23:56:18.250262    6706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="983ee093-2fc4-4ba7-8e1a-2da1bb03b953" containerName="mount-munger"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: I0923 23:56:18.250293    6706 memory_manager.go:354] "RemoveStaleState removing state" podUID="983ee093-2fc4-4ba7-8e1a-2da1bb03b953" containerName="mount-munger"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: W0923 23:56:18.253418    6706 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-496000" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-496000' and this object
	Sep 23 23:56:18 functional-496000 kubelet[6706]: E0923 23:56:18.253446    6706 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:functional-496000\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-496000' and this object" logger="UnhandledError"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: I0923 23:56:18.399220    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/74a6d439-f833-4703-ae46-1cf634bb0c09-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-pdmnm\" (UID: \"74a6d439-f833-4703-ae46-1cf634bb0c09\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-pdmnm"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: I0923 23:56:18.399291    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/342b29d7-6503-4601-b0b3-6b76d192d1d7-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-tgp5h\" (UID: \"342b29d7-6503-4601-b0b3-6b76d192d1d7\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-tgp5h"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: I0923 23:56:18.399308    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcxkz\" (UniqueName: \"kubernetes.io/projected/342b29d7-6503-4601-b0b3-6b76d192d1d7-kube-api-access-pcxkz\") pod \"kubernetes-dashboard-695b96c756-tgp5h\" (UID: \"342b29d7-6503-4601-b0b3-6b76d192d1d7\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-tgp5h"
	Sep 23 23:56:18 functional-496000 kubelet[6706]: I0923 23:56:18.399323    6706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cprz\" (UniqueName: \"kubernetes.io/projected/74a6d439-f833-4703-ae46-1cf634bb0c09-kube-api-access-4cprz\") pod \"dashboard-metrics-scraper-c5db448b4-pdmnm\" (UID: \"74a6d439-f833-4703-ae46-1cf634bb0c09\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-pdmnm"
	Sep 23 23:56:19 functional-496000 kubelet[6706]: I0923 23:56:19.308879    6706 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5e3b384adfe0111de2ee85edb12c95df31096969472eb47600f41c6a74edc87"
	
	
	==> storage-provisioner [3ceec30aaa31] <==
	I0923 23:55:06.548835       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 23:55:06.549452       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [efe774f10c13] <==
	I0923 23:55:22.098444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:55:22.103071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:55:22.103144       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:55:39.517493       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:55:39.518847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8411473d-87aa-406c-9fde-02460858e01e", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-496000_0fe3979a-1314-48e1-9998-43cf88dd7748 became leader
	I0923 23:55:39.520336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-496000_0fe3979a-1314-48e1-9998-43cf88dd7748!
	I0923 23:55:39.622616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-496000_0fe3979a-1314-48e1-9998-43cf88dd7748!
	I0923 23:55:49.560412       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0923 23:55:49.560447       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    1e5eeb21-3a10-4a73-8077-3acca57b580f 306 0 2024-09-23 23:53:51 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"} storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-23 23:53:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  2805c94c-11fb-466e-85a3-cd8bd6f87904 753 0 2024-09-23 23:55:49 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}} volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-23 23:55:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-23 23:55:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0923 23:55:49.560732       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904" provisioned
	I0923 23:55:49.560740       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0923 23:55:49.560743       1 volume_store.go:212] Trying to save persistentvolume "pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904"
	I0923 23:55:49.561114       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2805c94c-11fb-466e-85a3-cd8bd6f87904", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0923 23:55:49.565922       1 volume_store.go:219] persistentvolume "pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904" saved
	I0923 23:55:49.566218       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"2805c94c-11fb-466e-85a3-cd8bd6f87904", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-2805c94c-11fb-466e-85a3-cd8bd6f87904
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-496000 -n functional-496000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-496000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-pdmnm kubernetes-dashboard-695b96c756-tgp5h
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-496000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-pdmnm kubernetes-dashboard-695b96c756-tgp5h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-496000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-pdmnm kubernetes-dashboard-695b96c756-tgp5h: exit status 1 (45.558666ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-496000/192.168.105.4
	Start Time:       Mon, 23 Sep 2024 16:56:11 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.14
	IPs:
	  IP:  10.244.0.14
	Containers:
	  mount-munger:
	    Container ID:  docker://82d60f8f8595e13177e52e9eeb218564c2f165c7de593046b5ac49d4b6561cea
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 23 Sep 2024 16:56:13 -0700
	      Finished:     Mon, 23 Sep 2024 16:56:13 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prhzc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-prhzc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/busybox-mount to functional-496000
	  Normal  Pulling    8s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.484s (1.484s including waiting). Image size: 3547125 bytes.
	  Normal  Created    6s    kubelet            Created container mount-munger
	  Normal  Started    6s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-pdmnm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-tgp5h" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-496000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-pdmnm kubernetes-dashboard-695b96c756-tgp5h: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.02s)

TestMultiControlPlane/serial/StopSecondaryNode (64.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 node stop m02 -v=7 --alsologtostderr
E0923 17:00:32.811891    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:32.819192    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:32.832580    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:32.855937    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:32.897762    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:32.981115    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:33.144000    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:33.467341    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:34.110577    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:35.393967    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:37.957357    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:39.312674    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-515000 node stop m02 -v=7 --alsologtostderr: (12.193256125s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
E0923 17:00:43.080802    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:00:53.323943    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr: (25.959769084s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
E0923 17:01:07.037756    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:01:13.806822    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 3 (25.980915583s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0923 17:01:32.553549    3409 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0923 17:01:32.553560    3409 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.13s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0923 17:01:54.769091    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.976052708s)
ha_test.go:413: expected profile "ha-515000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-515000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-515000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-515000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 3 (25.961367083s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0923 17:02:24.489488    3423 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0923 17:02:24.489497    3423 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)
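The assertion at ha_test.go:413 reads the Status field out of `profile list --output json`. A hedged sketch that decodes only the fields visible in the payload above ({"invalid":[...],"valid":[{"Name":...,"Status":...},...]}); the struct is an assumption covering just that subset:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the two fields the assertion compares;
// the rest of the payload shown above is ignored by the decoder.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // in this run: ha-515000: Unknown
	}
}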

TestMultiControlPlane/serial/RestartSecondaryNode (82.99s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-515000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.083140125s)

-- stdout --
	* Starting "ha-515000-m02" control-plane node in "ha-515000" cluster
	* Restarting existing qemu2 VM for "ha-515000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-515000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:02:24.521996    3432 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:02:24.522252    3432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:02:24.522256    3432 out.go:358] Setting ErrFile to fd 2...
	I0923 17:02:24.522258    3432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:02:24.522383    3432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:02:24.522669    3432 mustload.go:65] Loading cluster: ha-515000
	I0923 17:02:24.522909    3432 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0923 17:02:24.523157    3432 host.go:58] "ha-515000-m02" host status: Stopped
	I0923 17:02:24.527097    3432 out.go:177] * Starting "ha-515000-m02" control-plane node in "ha-515000" cluster
	I0923 17:02:24.529996    3432 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:02:24.530010    3432 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:02:24.530017    3432 cache.go:56] Caching tarball of preloaded images
	I0923 17:02:24.530081    3432 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:02:24.530088    3432 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:02:24.530148    3432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/ha-515000/config.json ...
	I0923 17:02:24.530924    3432 start.go:360] acquireMachinesLock for ha-515000-m02: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:02:24.530982    3432 start.go:364] duration metric: took 27µs to acquireMachinesLock for "ha-515000-m02"
	I0923 17:02:24.530989    3432 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:02:24.530992    3432 fix.go:54] fixHost starting: m02
	I0923 17:02:24.531093    3432 fix.go:112] recreateIfNeeded on ha-515000-m02: state=Stopped err=<nil>
	W0923 17:02:24.531099    3432 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:02:24.534070    3432 out.go:177] * Restarting existing qemu2 VM for "ha-515000-m02" ...
	I0923 17:02:24.537972    3432 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:02:24.538006    3432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:55:5f:9c:bf:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/disk.qcow2
	I0923 17:02:24.540316    3432 main.go:141] libmachine: STDOUT: 
	I0923 17:02:24.540331    3432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:02:24.540355    3432 fix.go:56] duration metric: took 9.360959ms for fixHost
	I0923 17:02:24.540364    3432 start.go:83] releasing machines lock for "ha-515000-m02", held for 9.373917ms
	W0923 17:02:24.540371    3432 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:02:24.540397    3432 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:02:24.540401    3432 start.go:729] Will try again in 5 seconds ...
	I0923 17:02:29.542346    3432 start.go:360] acquireMachinesLock for ha-515000-m02: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:02:29.542512    3432 start.go:364] duration metric: took 118.667µs to acquireMachinesLock for "ha-515000-m02"
	I0923 17:02:29.542556    3432 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:02:29.542562    3432 fix.go:54] fixHost starting: m02
	I0923 17:02:29.542744    3432 fix.go:112] recreateIfNeeded on ha-515000-m02: state=Stopped err=<nil>
	W0923 17:02:29.542751    3432 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:02:29.547055    3432 out.go:177] * Restarting existing qemu2 VM for "ha-515000-m02" ...
	I0923 17:02:29.551155    3432 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:02:29.551216    3432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:55:5f:9c:bf:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/disk.qcow2
	I0923 17:02:29.553705    3432 main.go:141] libmachine: STDOUT: 
	I0923 17:02:29.553728    3432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:02:29.553752    3432 fix.go:56] duration metric: took 11.190167ms for fixHost
	I0923 17:02:29.553756    3432 start.go:83] releasing machines lock for "ha-515000-m02", held for 11.236167ms
	W0923 17:02:29.553796    3432 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:02:29.558099    3432 out.go:201] 
	W0923 17:02:29.562122    3432 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:02:29.562136    3432 out.go:270] * 
	* 
	W0923 17:02:29.563921    3432 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:02:29.568094    3432 out.go:201] 

** /stderr **
ha_test.go:422: I0923 17:02:24.521996    3432 out.go:345] Setting OutFile to fd 1 ...
I0923 17:02:24.522252    3432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 17:02:24.522256    3432 out.go:358] Setting ErrFile to fd 2...
I0923 17:02:24.522258    3432 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 17:02:24.522383    3432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 17:02:24.522669    3432 mustload.go:65] Loading cluster: ha-515000
I0923 17:02:24.522909    3432 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0923 17:02:24.523157    3432 host.go:58] "ha-515000-m02" host status: Stopped
I0923 17:02:24.527097    3432 out.go:177] * Starting "ha-515000-m02" control-plane node in "ha-515000" cluster
I0923 17:02:24.529996    3432 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 17:02:24.530010    3432 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 17:02:24.530017    3432 cache.go:56] Caching tarball of preloaded images
I0923 17:02:24.530081    3432 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 17:02:24.530088    3432 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 17:02:24.530148    3432 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/ha-515000/config.json ...
I0923 17:02:24.530924    3432 start.go:360] acquireMachinesLock for ha-515000-m02: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 17:02:24.530982    3432 start.go:364] duration metric: took 27µs to acquireMachinesLock for "ha-515000-m02"
I0923 17:02:24.530989    3432 start.go:96] Skipping create...Using existing machine configuration
I0923 17:02:24.530992    3432 fix.go:54] fixHost starting: m02
I0923 17:02:24.531093    3432 fix.go:112] recreateIfNeeded on ha-515000-m02: state=Stopped err=<nil>
W0923 17:02:24.531099    3432 fix.go:138] unexpected machine state, will restart: <nil>
I0923 17:02:24.534070    3432 out.go:177] * Restarting existing qemu2 VM for "ha-515000-m02" ...
I0923 17:02:24.537972    3432 qemu.go:418] Using hvf for hardware acceleration
I0923 17:02:24.538006    3432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:55:5f:9c:bf:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/disk.qcow2
I0923 17:02:24.540316    3432 main.go:141] libmachine: STDOUT: 
I0923 17:02:24.540331    3432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 17:02:24.540355    3432 fix.go:56] duration metric: took 9.360959ms for fixHost
I0923 17:02:24.540364    3432 start.go:83] releasing machines lock for "ha-515000-m02", held for 9.373917ms
W0923 17:02:24.540371    3432 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 17:02:24.540397    3432 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 17:02:24.540401    3432 start.go:729] Will try again in 5 seconds ...
I0923 17:02:29.542346    3432 start.go:360] acquireMachinesLock for ha-515000-m02: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 17:02:29.542512    3432 start.go:364] duration metric: took 118.667µs to acquireMachinesLock for "ha-515000-m02"
I0923 17:02:29.542556    3432 start.go:96] Skipping create...Using existing machine configuration
I0923 17:02:29.542562    3432 fix.go:54] fixHost starting: m02
I0923 17:02:29.542744    3432 fix.go:112] recreateIfNeeded on ha-515000-m02: state=Stopped err=<nil>
W0923 17:02:29.542751    3432 fix.go:138] unexpected machine state, will restart: <nil>
I0923 17:02:29.547055    3432 out.go:177] * Restarting existing qemu2 VM for "ha-515000-m02" ...
I0923 17:02:29.551155    3432 qemu.go:418] Using hvf for hardware acceleration
I0923 17:02:29.551216    3432 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:55:5f:9c:bf:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000-m02/disk.qcow2
I0923 17:02:29.553705    3432 main.go:141] libmachine: STDOUT: 
I0923 17:02:29.553728    3432 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
                                                
I0923 17:02:29.553752    3432 fix.go:56] duration metric: took 11.190167ms for fixHost
I0923 17:02:29.553756    3432 start.go:83] releasing machines lock for "ha-515000-m02", held for 11.236167ms
W0923 17:02:29.553796    3432 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 17:02:29.558099    3432 out.go:201] 
W0923 17:02:29.562122    3432 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 17:02:29.562136    3432 out.go:270] * 
* 
W0923 17:02:29.563921    3432 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 17:02:29.568094    3432 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-515000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr: (25.959076292s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0923 17:03:16.689537    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (25.959234583s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: connect: operation timed out

** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 3 (25.986319209s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0923 17:03:47.477429    3448 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0923 17:03:47.477450    3448 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (82.99s)
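Every restart attempt above dies at the same point: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A small sketch (not minikube code) that distinguishes the two usual symptoms — "connection refused" when the socket file exists but no daemon is accepting on it, versus "no such file or directory" when socket_vmnet was never started:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver hands QEMU a descriptor obtained from this unix
	// socket; if nothing is accepting on it, every VM start fails as above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused": socket file exists, but no listener.
		// "no such file or directory": socket_vmnet never created it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}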

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-515000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-515000 -v=7 --alsologtostderr
E0923 17:05:32.800915    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:05:39.304966    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:06:00.528082    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-515000 -v=7 --alsologtostderr: (3m49.013887458s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-515000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-515000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.222194417s)

-- stdout --
	* [ha-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-515000" primary control-plane node in "ha-515000" cluster
	* Restarting existing qemu2 VM for "ha-515000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-515000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:07:39.563796    3499 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:07:39.564003    3499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:39.564008    3499 out.go:358] Setting ErrFile to fd 2...
	I0923 17:07:39.564011    3499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:39.564162    3499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:07:39.565498    3499 out.go:352] Setting JSON to false
	I0923 17:07:39.585830    3499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2222,"bootTime":1727134237,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:07:39.585905    3499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:07:39.590487    3499 out.go:177] * [ha-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:07:39.598508    3499 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:07:39.598562    3499 notify.go:220] Checking for updates...
	I0923 17:07:39.605467    3499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:07:39.609386    3499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:07:39.612411    3499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:07:39.615455    3499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:07:39.618448    3499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:07:39.621749    3499 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:07:39.621806    3499 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:07:39.626477    3499 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:07:39.633429    3499 start.go:297] selected driver: qemu2
	I0923 17:07:39.633435    3499 start.go:901] validating driver "qemu2" against &{Name:ha-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-515000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:07:39.633531    3499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:07:39.636131    3499 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:07:39.636155    3499 cni.go:84] Creating CNI manager for ""
	I0923 17:07:39.636178    3499 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 17:07:39.636224    3499 start.go:340] cluster config:
	{Name:ha-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-515000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:07:39.640165    3499 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:07:39.648434    3499 out.go:177] * Starting "ha-515000" primary control-plane node in "ha-515000" cluster
	I0923 17:07:39.652487    3499 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:07:39.652505    3499 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:07:39.652512    3499 cache.go:56] Caching tarball of preloaded images
	I0923 17:07:39.652579    3499 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:07:39.652585    3499 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:07:39.652663    3499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/ha-515000/config.json ...
	I0923 17:07:39.653140    3499 start.go:360] acquireMachinesLock for ha-515000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:07:39.653175    3499 start.go:364] duration metric: took 28.708µs to acquireMachinesLock for "ha-515000"
	I0923 17:07:39.653185    3499 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:07:39.653190    3499 fix.go:54] fixHost starting: 
	I0923 17:07:39.653313    3499 fix.go:112] recreateIfNeeded on ha-515000: state=Stopped err=<nil>
	W0923 17:07:39.653325    3499 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:07:39.656412    3499 out.go:177] * Restarting existing qemu2 VM for "ha-515000" ...
	I0923 17:07:39.664287    3499 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:07:39.664322    3499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6b:c0:03:d4:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/disk.qcow2
	I0923 17:07:39.666329    3499 main.go:141] libmachine: STDOUT: 
	I0923 17:07:39.666348    3499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:07:39.666382    3499 fix.go:56] duration metric: took 13.189917ms for fixHost
	I0923 17:07:39.666388    3499 start.go:83] releasing machines lock for "ha-515000", held for 13.208959ms
	W0923 17:07:39.666395    3499 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:07:39.666431    3499 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:07:39.666436    3499 start.go:729] Will try again in 5 seconds ...
	I0923 17:07:44.667447    3499 start.go:360] acquireMachinesLock for ha-515000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:07:44.667961    3499 start.go:364] duration metric: took 394.125µs to acquireMachinesLock for "ha-515000"
	I0923 17:07:44.668112    3499 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:07:44.668133    3499 fix.go:54] fixHost starting: 
	I0923 17:07:44.668894    3499 fix.go:112] recreateIfNeeded on ha-515000: state=Stopped err=<nil>
	W0923 17:07:44.668923    3499 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:07:44.672431    3499 out.go:177] * Restarting existing qemu2 VM for "ha-515000" ...
	I0923 17:07:44.680384    3499 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:07:44.680618    3499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6b:c0:03:d4:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/disk.qcow2
	I0923 17:07:44.690229    3499 main.go:141] libmachine: STDOUT: 
	I0923 17:07:44.690309    3499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:07:44.690416    3499 fix.go:56] duration metric: took 22.285083ms for fixHost
	I0923 17:07:44.690439    3499 start.go:83] releasing machines lock for "ha-515000", held for 22.455125ms
	W0923 17:07:44.690649    3499 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:07:44.698371    3499 out.go:201] 
	W0923 17:07:44.701347    3499 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:07:44.701376    3499 out.go:270] * 
	* 
	W0923 17:07:44.703866    3499 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:07:44.710351    3499 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-515000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-515000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (32.794083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.37s)
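Note how the post-mortems differ: earlier failures returned exit status 3 with state "Error" (the VM looked up but SSH was unreachable), while this one returns exit status 7 with state "Stopped". A sketch of reading both signals the way the helpers do — capture stdout and the exit code together (binary path and profile name copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-515000", "-n", "ha-515000")
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // 3 and 7 are the codes seen in this report
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("state=%q exit=%d\n", string(out), code)
}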

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-515000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.507458ms)

-- stdout --
	* The control-plane node ha-515000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-515000"

-- /stdout --
** stderr ** 
	I0923 17:07:44.850194    3514 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:07:44.850427    3514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:44.850431    3514 out.go:358] Setting ErrFile to fd 2...
	I0923 17:07:44.850433    3514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:44.850580    3514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:07:44.850802    3514 mustload.go:65] Loading cluster: ha-515000
	I0923 17:07:44.851049    3514 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0923 17:07:44.851365    3514 out.go:270] ! The control-plane node ha-515000 host is not running (will try others): state=Stopped
	! The control-plane node ha-515000 host is not running (will try others): state=Stopped
	W0923 17:07:44.851474    3514 out.go:270] ! The control-plane node ha-515000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-515000-m02 host is not running (will try others): state=Stopped
	I0923 17:07:44.856283    3514 out.go:177] * The control-plane node ha-515000-m03 host is not running: state=Stopped
	I0923 17:07:44.860190    3514 out.go:177]   To start a cluster, run: "minikube start -p ha-515000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-515000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr: exit status 7 (30.443625ms)

-- stdout --
	ha-515000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 17:07:44.892310    3516 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:07:44.892470    3516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:44.892474    3516 out.go:358] Setting ErrFile to fd 2...
	I0923 17:07:44.892476    3516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:07:44.892616    3516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:07:44.892739    3516 out.go:352] Setting JSON to false
	I0923 17:07:44.892752    3516 mustload.go:65] Loading cluster: ha-515000
	I0923 17:07:44.892817    3516 notify.go:220] Checking for updates...
	I0923 17:07:44.892989    3516 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:07:44.892998    3516 status.go:174] checking status of ha-515000 ...
	I0923 17:07:44.893233    3516 status.go:364] ha-515000 host status = "Stopped" (err=<nil>)
	I0923 17:07:44.893236    3516 status.go:377] host is not running, skipping remaining checks
	I0923 17:07:44.893238    3516 status.go:176] ha-515000 status: &{Name:ha-515000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:07:44.893248    3516 status.go:174] checking status of ha-515000-m02 ...
	I0923 17:07:44.893338    3516 status.go:364] ha-515000-m02 host status = "Stopped" (err=<nil>)
	I0923 17:07:44.893340    3516 status.go:377] host is not running, skipping remaining checks
	I0923 17:07:44.893342    3516 status.go:176] ha-515000-m02 status: &{Name:ha-515000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:07:44.893346    3516 status.go:174] checking status of ha-515000-m03 ...
	I0923 17:07:44.893440    3516 status.go:364] ha-515000-m03 host status = "Stopped" (err=<nil>)
	I0923 17:07:44.893442    3516 status.go:377] host is not running, skipping remaining checks
	I0923 17:07:44.893444    3516 status.go:176] ha-515000-m03 status: &{Name:ha-515000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:07:44.893447    3516 status.go:174] checking status of ha-515000-m04 ...
	I0923 17:07:44.893538    3516 status.go:364] ha-515000-m04 host status = "Stopped" (err=<nil>)
	I0923 17:07:44.893541    3516 status.go:377] host is not running, skipping remaining checks
	I0923 17:07:44.893542    3516 status.go:176] ha-515000-m04 status: &{Name:ha-515000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (30.172541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
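The status text in the stdout block above is the plain-text, per-node form the test assertions walk. A toy Go parser for that layout, assuming only what is visible in that output (a bare node-name line, then "key: value" lines, with a blank line between nodes); the excerpt is copied from the log:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// sample is an excerpt of the stdout block above.
const sample = `ha-515000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-515000-m04
type: Worker
host: Stopped
kubelet: Stopped
`

func main() {
	nodes := map[string]map[string]string{}
	cur := ""
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "":
			cur = "" // a blank line ends the current node block
		case !strings.Contains(line, ":"):
			cur = line // a bare name starts a new node block
			nodes[cur] = map[string]string{}
		case cur != "":
			k, v, _ := strings.Cut(line, ": ")
			nodes[cur][k] = v
		}
	}
	fmt.Println(nodes["ha-515000-m04"]["host"]) // Stopped
}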

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-515000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-515000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-515000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-515000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (29.505375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
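
The assertion at ha_test.go:413 inspects the "Status" field of the "ha-515000" entry in the JSON shown above. A minimal sketch of that check, assuming only the JSON shape visible in the log (a "valid" array of profiles with Name and Status fields); the real assertion lives in ha_test.go:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            if p.Name == "ha-515000" && p.Status != "Degraded" {
                // the log above shows Status "Starting"
                fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
            }
        }
    }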

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 stop -v=7 --alsologtostderr
E0923 17:10:32.792829    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:10:39.295000    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-515000 stop -v=7 --alsologtostderr: (3m21.969136084s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr: exit status 7 (65.3785ms)

                                                
                                                
-- stdout --
	ha-515000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-515000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:11:07.032391    3560 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:11:07.032614    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:07.032619    3560 out.go:358] Setting ErrFile to fd 2...
	I0923 17:11:07.032622    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:07.032776    3560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:11:07.032956    3560 out.go:352] Setting JSON to false
	I0923 17:11:07.032969    3560 mustload.go:65] Loading cluster: ha-515000
	I0923 17:11:07.033013    3560 notify.go:220] Checking for updates...
	I0923 17:11:07.033292    3560 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:11:07.033306    3560 status.go:174] checking status of ha-515000 ...
	I0923 17:11:07.033651    3560 status.go:364] ha-515000 host status = "Stopped" (err=<nil>)
	I0923 17:11:07.033657    3560 status.go:377] host is not running, skipping remaining checks
	I0923 17:11:07.033659    3560 status.go:176] ha-515000 status: &{Name:ha-515000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:11:07.033673    3560 status.go:174] checking status of ha-515000-m02 ...
	I0923 17:11:07.033804    3560 status.go:364] ha-515000-m02 host status = "Stopped" (err=<nil>)
	I0923 17:11:07.033809    3560 status.go:377] host is not running, skipping remaining checks
	I0923 17:11:07.033812    3560 status.go:176] ha-515000-m02 status: &{Name:ha-515000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:11:07.033817    3560 status.go:174] checking status of ha-515000-m03 ...
	I0923 17:11:07.033953    3560 status.go:364] ha-515000-m03 host status = "Stopped" (err=<nil>)
	I0923 17:11:07.033958    3560 status.go:377] host is not running, skipping remaining checks
	I0923 17:11:07.033960    3560 status.go:176] ha-515000-m03 status: &{Name:ha-515000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 17:11:07.033965    3560 status.go:174] checking status of ha-515000-m04 ...
	I0923 17:11:07.034092    3560 status.go:364] ha-515000-m04 host status = "Stopped" (err=<nil>)
	I0923 17:11:07.034097    3560 status.go:377] host is not running, skipping remaining checks
	I0923 17:11:07.034099    3560 status.go:176] ha-515000-m04 status: &{Name:ha-515000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": ha-515000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": ha-515000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr": ha-515000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-515000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (32.814708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
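
The three assertions above (ha_test.go:543, :549, :552) read as counts over the status text: control-plane entries, stopped kubelets, stopped apiservers. The real checks live in ha_test.go; a rough stand-in that counts the same markers in the output shown, to make the shape of the check concrete:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // abbreviated paste of the "minikube status" stdout above
        status := `ha-515000
    type: Control Plane
    host: Stopped
    kubelet: Stopped
    apiserver: Stopped

    ha-515000-m04
    type: Worker
    host: Stopped
    kubelet: Stopped`

        fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))
        fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
        fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
    }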

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-515000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-515000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182301208s)

                                                
                                                
-- stdout --
	* [ha-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-515000" primary control-plane node in "ha-515000" cluster
	* Restarting existing qemu2 VM for "ha-515000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-515000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:11:07.096374    3564 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:11:07.096513    3564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:07.096517    3564 out.go:358] Setting ErrFile to fd 2...
	I0923 17:11:07.096519    3564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:07.096657    3564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:11:07.097715    3564 out.go:352] Setting JSON to false
	I0923 17:11:07.113843    3564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2430,"bootTime":1727134237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:11:07.113916    3564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:11:07.119194    3564 out.go:177] * [ha-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:11:07.126083    3564 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:11:07.126142    3564 notify.go:220] Checking for updates...
	I0923 17:11:07.133055    3564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:11:07.136050    3564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:11:07.139139    3564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:11:07.142109    3564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:11:07.145080    3564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:11:07.148418    3564 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:11:07.148672    3564 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:11:07.152965    3564 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:11:07.163610    3564 start.go:297] selected driver: qemu2
	I0923 17:11:07.163616    3564 start.go:901] validating driver "qemu2" against &{Name:ha-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-515000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:11:07.163690    3564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:11:07.166056    3564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:11:07.166081    3564 cni.go:84] Creating CNI manager for ""
	I0923 17:11:07.166100    3564 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0923 17:11:07.166147    3564 start.go:340] cluster config:
	{Name:ha-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-515000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:11:07.169878    3564 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:11:07.178064    3564 out.go:177] * Starting "ha-515000" primary control-plane node in "ha-515000" cluster
	I0923 17:11:07.181949    3564 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:11:07.181967    3564 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:11:07.181974    3564 cache.go:56] Caching tarball of preloaded images
	I0923 17:11:07.182053    3564 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:11:07.182058    3564 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:11:07.182128    3564 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/ha-515000/config.json ...
	I0923 17:11:07.182615    3564 start.go:360] acquireMachinesLock for ha-515000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:11:07.182650    3564 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "ha-515000"
	I0923 17:11:07.182660    3564 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:11:07.182665    3564 fix.go:54] fixHost starting: 
	I0923 17:11:07.182785    3564 fix.go:112] recreateIfNeeded on ha-515000: state=Stopped err=<nil>
	W0923 17:11:07.182795    3564 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:11:07.186143    3564 out.go:177] * Restarting existing qemu2 VM for "ha-515000" ...
	I0923 17:11:07.193125    3564 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:11:07.193166    3564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6b:c0:03:d4:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/disk.qcow2
	I0923 17:11:07.195332    3564 main.go:141] libmachine: STDOUT: 
	I0923 17:11:07.195348    3564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:11:07.195377    3564 fix.go:56] duration metric: took 12.711583ms for fixHost
	I0923 17:11:07.195381    3564 start.go:83] releasing machines lock for "ha-515000", held for 12.727083ms
	W0923 17:11:07.195388    3564 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:11:07.195427    3564 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:11:07.195432    3564 start.go:729] Will try again in 5 seconds ...
	I0923 17:11:12.197480    3564 start.go:360] acquireMachinesLock for ha-515000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:11:12.197942    3564 start.go:364] duration metric: took 363.167µs to acquireMachinesLock for "ha-515000"
	I0923 17:11:12.198067    3564 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:11:12.198091    3564 fix.go:54] fixHost starting: 
	I0923 17:11:12.198793    3564 fix.go:112] recreateIfNeeded on ha-515000: state=Stopped err=<nil>
	W0923 17:11:12.198824    3564 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:11:12.203257    3564 out.go:177] * Restarting existing qemu2 VM for "ha-515000" ...
	I0923 17:11:12.208216    3564 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:11:12.208428    3564 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:6b:c0:03:d4:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/ha-515000/disk.qcow2
	I0923 17:11:12.217297    3564 main.go:141] libmachine: STDOUT: 
	I0923 17:11:12.217380    3564 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:11:12.217465    3564 fix.go:56] duration metric: took 19.378375ms for fixHost
	I0923 17:11:12.217484    3564 start.go:83] releasing machines lock for "ha-515000", held for 19.513417ms
	W0923 17:11:12.217679    3564 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-515000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:11:12.225185    3564 out.go:201] 
	W0923 17:11:12.226779    3564 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:11:12.226803    3564 out.go:270] * 
	* 
	W0923 17:11:12.229231    3564 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:11:12.238234    3564 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-515000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (70.397042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
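
Every start failure in this report traces back to the same line: Failed to connect to "/var/run/socket_vmnet": Connection refused. Nothing is accepting connections on the socket_vmnet unix socket that the qemu2 driver is launched through (see the /opt/socket_vmnet/bin/socket_vmnet_client invocation in the stderr above). A small diagnostic sketch, with the socket path taken from the log; this is not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // mirrors the driver failure: connect(2) is refused when the
            // socket_vmnet daemon is not running
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }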

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-515000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-515000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-515000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-515000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\
"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logv
iewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\
":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (30.430625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-515000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-515000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.071542ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-515000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-515000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:11:12.430255    3579 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:11:12.430423    3579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:12.430427    3579 out.go:358] Setting ErrFile to fd 2...
	I0923 17:11:12.430429    3579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:12.430550    3579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:11:12.430786    3579 mustload.go:65] Loading cluster: ha-515000
	I0923 17:11:12.431042    3579 config.go:182] Loaded profile config "ha-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0923 17:11:12.431348    3579 out.go:270] ! The control-plane node ha-515000 host is not running (will try others): state=Stopped
	! The control-plane node ha-515000 host is not running (will try others): state=Stopped
	W0923 17:11:12.431457    3579 out.go:270] ! The control-plane node ha-515000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-515000-m02 host is not running (will try others): state=Stopped
	I0923 17:11:12.435738    3579 out.go:177] * The control-plane node ha-515000-m03 host is not running: state=Stopped
	I0923 17:11:12.439728    3579 out.go:177]   To start a cluster, run: "minikube start -p ha-515000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-515000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-515000 -n ha-515000: exit status 7 (30.22925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-515000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)
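
Exit status 83 here comes from "node add" walking the control-plane nodes for a running host and finding none, as the stderr shows ("will try others"). An illustrative version of that fallback, with node names, states, and message text copied from the log; the loop itself is assumed, not minikube's actual code:

    package main

    import "fmt"

    func main() {
        controlPlanes := []string{"ha-515000", "ha-515000-m02", "ha-515000-m03"}
        state := "Stopped" // every host above reports Stopped
        for i, n := range controlPlanes {
            if state == "Running" {
                fmt.Println("adding node via", n)
                return
            }
            if i < len(controlPlanes)-1 {
                fmt.Printf("! The control-plane node %s host is not running (will try others): state=%s\n", n, state)
            } else {
                fmt.Printf("* The control-plane node %s host is not running: state=%s\n", n, state)
            }
        }
        fmt.Println(`To start a cluster, run: "minikube start -p ha-515000"`)
    }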

                                                
                                    
TestImageBuild/serial/Setup (10.12s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-496000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-496000 --driver=qemu2 : exit status 80 (10.054870708s)

                                                
                                                
-- stdout --
	* [image-496000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-496000" primary control-plane node in "image-496000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-496000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-496000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-496000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-496000 -n image-496000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-496000 -n image-496000: exit status 7 (68.310209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-496000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.12s)
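
Note the create path behaves slightly differently from the restart path above: on the first failure it deletes the half-created VM ("Deleting \"image-496000\" in qemu2 ...") and retries once before exiting with GUEST_PROVISION (exit status 80). A sketch of that retry-once shape; the function name is illustrative, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the driver create that fails in the log above.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            // the restart path above logs "Will try again in 5 seconds"
            time.Sleep(5 * time.Second)
            if err := createHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }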

                                                
                                    
TestJSONOutput/start/Command (10.15s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.152367708s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2aa1193a-e98d-43ee-bcf0-559e987afc93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-945000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"616e9844-92ad-4a21-b3a8-81c4e3e91a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"31fa1213-6abc-44f7-8793-e9944a281cd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig"}}
	{"specversion":"1.0","id":"4df74ac7-6717-455c-b730-57fbd712771a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"eb9c6658-a61f-4cf2-bc0c-c63a4b90684d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"18c7d184-8536-47cb-a294-4eea199d112e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube"}}
	{"specversion":"1.0","id":"6615fbb8-d161-4e47-a187-37cfa6cc6097","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"214f43ae-8b80-4574-8345-bdcbd4fb0157","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddd77744-395c-4ecd-921c-8f5885b01aa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"97f500b9-30fe-442c-bb02-3fd2810d073b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-945000\" primary control-plane node in \"json-output-945000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d3b4625-b0d0-406a-97ed-065fffac3496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"eb1a029f-9efd-4cc6-b621-3ce40834cfea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-945000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff02d5eb-d4c0-409b-9d8b-7acb597df693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"61182009-98ec-41ed-8f52-6c905bad134a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cf039a57-fcd8-4546-b15a-40778767c127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-945000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ae3643b9-b036-4dfe-81b5-70bf473e7a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bbbeb062-c062-442b-b126-d15fc7990270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-945000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.15s)
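
The marshalling failure is mechanical: json_output_test.go evidently decodes stdout line by line as CloudEvents (json_output_test.go:213 quotes the single offending line), and the raw OUTPUT:/ERROR: lines the driver prints are not JSON, so decoding stops at the first such line. A minimal reproduction of the exact error text:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        lines := []string{
            `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{}}`,
            `OUTPUT: `, // raw driver output interleaved with the JSON events
        }
        for _, l := range lines {
            var ev map[string]interface{}
            if err := json.Unmarshal([]byte(l), &ev); err != nil {
                // prints: invalid character 'O' looking for beginning of value
                fmt.Println("converting to cloud events:", err)
            }
        }
    }

The same decoder explains the unpause failure further down, where the plain-text "* The control-plane node ..." line yields "invalid character '*' looking for beginning of value".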

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser: exit status 83 (79.563125ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c1a71e36-6ec5-4d2f-8536-4f6011987777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-945000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"24beb3df-c3f8-4cac-b0b6-2c01bb6b21df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-945000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-945000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser: exit status 83 (44.574375ms)

-- stdout --
	* The control-plane node json-output-945000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-945000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-945000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-945000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)
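Both TestJSONOutput failures above share one mechanism: the test decodes every stdout line as a CloudEvent-shaped JSON object, so any plain-text line aborts the conversion. The "OUTPUT: " lines injected by the qemu wrapper produce `invalid character 'O'`, and the human-readable fallback ("* The control-plane node ...") produces `invalid character '*'`. Below is a minimal sketch of that decoding step; the field names mirror the events printed in these logs, but the actual helper in json_output_test.go may differ.

// Sketch only: decode each stdout line as a CloudEvent-shaped JSON object.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	ID          string          `json:"id"`
	Source      string          `json:"source"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func parseEvents(stdout string) ([]cloudEvent, error) {
	var events []cloudEvent
	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var ev cloudEvent
		// A line such as "OUTPUT: " or "* The control-plane node ..." fails
		// here with `invalid character 'O'` / `invalid character '*'`,
		// matching the errors reported above.
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			return nil, fmt.Errorf("converting to cloud events: %w", err)
		}
		events = append(events, ev)
	}
	return events, sc.Err()
}

func main() {
	_, err := parseEvents("* The control-plane node is not running\n")
	fmt.Println(err) // converting to cloud events: invalid character '*' looking for beginning of value
}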

TestMinikubeProfile (10.17s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-646000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-646000 --driver=qemu2 : exit status 80 (9.869316917s)

-- stdout --
	* [first-646000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-646000" primary control-plane node in "first-646000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-646000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 17:11:46.685257 -0700 PDT m=+2100.709013168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-648000 -n second-648000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-648000 -n second-648000: exit status 85 (81.51275ms)

-- stdout --
	* Profile "second-648000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-648000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-648000" host is not running, skipping log retrieval (state="* Profile \"second-648000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-648000\"")
helpers_test.go:175: Cleaning up "second-648000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-648000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 17:11:46.875902 -0700 PDT m=+2100.899664876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-646000 -n first-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-646000 -n first-646000: exit status 7 (30.053ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-646000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-646000
--- FAIL: TestMinikubeProfile (10.17s)
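Every qemu2 failure in this report bottoms out in the same line: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation exits with status 80 (GUEST_PROVISION). A quick probe, assuming only that the daemon should be listening on that unix socket, reproduces the refusal independently of minikube:

// Sketch: dial the socket that socket_vmnet_client uses. On this host the
// dial fails the same way the logs above do.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Expected here: "dial unix /var/run/socket_vmnet: connect: connection refused"
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}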

TestMountStart/serial/StartWithMountFirst (10.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-939000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-939000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.145677s)

-- stdout --
	* [mount-start-1-939000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-939000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-939000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-939000 -n mount-start-1-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-939000 -n mount-start-1-939000: exit status 7 (69.316167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-939000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.22s)

TestMultiNode/serial/FreshStart2Nodes (10.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0923 17:12:02.380618    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.941105s)

-- stdout --
	* [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:11:57.414694    3722 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:11:57.414831    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:57.414835    3722 out.go:358] Setting ErrFile to fd 2...
	I0923 17:11:57.414837    3722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:11:57.414954    3722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:11:57.415968    3722 out.go:352] Setting JSON to false
	I0923 17:11:57.432138    3722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2480,"bootTime":1727134237,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:11:57.432238    3722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:11:57.439593    3722 out.go:177] * [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:11:57.447533    3722 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:11:57.447562    3722 notify.go:220] Checking for updates...
	I0923 17:11:57.455446    3722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:11:57.458526    3722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:11:57.461497    3722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:11:57.464443    3722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:11:57.467464    3722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:11:57.470674    3722 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:11:57.474439    3722 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:11:57.481544    3722 start.go:297] selected driver: qemu2
	I0923 17:11:57.481550    3722 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:11:57.481558    3722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:11:57.483928    3722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:11:57.486493    3722 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:11:57.489558    3722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:11:57.489597    3722 cni.go:84] Creating CNI manager for ""
	I0923 17:11:57.489617    3722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 17:11:57.489621    3722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 17:11:57.489652    3722 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:11:57.493423    3722 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:11:57.496585    3722 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0923 17:11:57.504506    3722 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:11:57.504522    3722 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:11:57.504529    3722 cache.go:56] Caching tarball of preloaded images
	I0923 17:11:57.504595    3722 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:11:57.504602    3722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:11:57.504797    3722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/multinode-317000/config.json ...
	I0923 17:11:57.504815    3722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/multinode-317000/config.json: {Name:mkd61f84432404998741a50da6350fec82f1c444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:11:57.505049    3722 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:11:57.505087    3722 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "multinode-317000"
	I0923 17:11:57.505101    3722 start.go:93] Provisioning new machine with config: &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:11:57.505159    3722 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:11:57.513440    3722 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:11:57.531835    3722 start.go:159] libmachine.API.Create for "multinode-317000" (driver="qemu2")
	I0923 17:11:57.531859    3722 client.go:168] LocalClient.Create starting
	I0923 17:11:57.531923    3722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:11:57.531960    3722 main.go:141] libmachine: Decoding PEM data...
	I0923 17:11:57.531973    3722 main.go:141] libmachine: Parsing certificate...
	I0923 17:11:57.532010    3722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:11:57.532037    3722 main.go:141] libmachine: Decoding PEM data...
	I0923 17:11:57.532043    3722 main.go:141] libmachine: Parsing certificate...
	I0923 17:11:57.532410    3722 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:11:57.694116    3722 main.go:141] libmachine: Creating SSH key...
	I0923 17:11:57.870087    3722 main.go:141] libmachine: Creating Disk image...
	I0923 17:11:57.870093    3722 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:11:57.870299    3722 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:11:57.879736    3722 main.go:141] libmachine: STDOUT: 
	I0923 17:11:57.879757    3722 main.go:141] libmachine: STDERR: 
	I0923 17:11:57.879827    3722 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2 +20000M
	I0923 17:11:57.887705    3722 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:11:57.887721    3722 main.go:141] libmachine: STDERR: 
	I0923 17:11:57.887739    3722 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:11:57.887744    3722 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:11:57.887756    3722 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:11:57.887784    3722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:12:77:6a:b3:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:11:57.889398    3722 main.go:141] libmachine: STDOUT: 
	I0923 17:11:57.889417    3722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:11:57.889437    3722 client.go:171] duration metric: took 357.58475ms to LocalClient.Create
	I0923 17:11:59.891542    3722 start.go:128] duration metric: took 2.386437542s to createHost
	I0923 17:11:59.891603    3722 start.go:83] releasing machines lock for "multinode-317000", held for 2.386574292s
	W0923 17:11:59.891689    3722 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:11:59.903822    3722 out.go:177] * Deleting "multinode-317000" in qemu2 ...
	W0923 17:11:59.937887    3722 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:11:59.937910    3722 start.go:729] Will try again in 5 seconds ...
	I0923 17:12:04.939979    3722 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:12:04.940538    3722 start.go:364] duration metric: took 439.417µs to acquireMachinesLock for "multinode-317000"
	I0923 17:12:04.940678    3722 start.go:93] Provisioning new machine with config: &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:12:04.940988    3722 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:12:04.960728    3722 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:12:05.011201    3722 start.go:159] libmachine.API.Create for "multinode-317000" (driver="qemu2")
	I0923 17:12:05.011276    3722 client.go:168] LocalClient.Create starting
	I0923 17:12:05.011386    3722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:12:05.011449    3722 main.go:141] libmachine: Decoding PEM data...
	I0923 17:12:05.011462    3722 main.go:141] libmachine: Parsing certificate...
	I0923 17:12:05.011521    3722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:12:05.011564    3722 main.go:141] libmachine: Decoding PEM data...
	I0923 17:12:05.011574    3722 main.go:141] libmachine: Parsing certificate...
	I0923 17:12:05.012240    3722 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:12:05.183616    3722 main.go:141] libmachine: Creating SSH key...
	I0923 17:12:05.256440    3722 main.go:141] libmachine: Creating Disk image...
	I0923 17:12:05.256445    3722 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:12:05.256641    3722 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:12:05.265736    3722 main.go:141] libmachine: STDOUT: 
	I0923 17:12:05.265752    3722 main.go:141] libmachine: STDERR: 
	I0923 17:12:05.265809    3722 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2 +20000M
	I0923 17:12:05.273468    3722 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:12:05.273483    3722 main.go:141] libmachine: STDERR: 
	I0923 17:12:05.273492    3722 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:12:05.273497    3722 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:12:05.273506    3722 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:12:05.273539    3722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6d:f4:b5:97:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:12:05.275070    3722 main.go:141] libmachine: STDOUT: 
	I0923 17:12:05.275083    3722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:12:05.275096    3722 client.go:171] duration metric: took 263.823458ms to LocalClient.Create
	I0923 17:12:07.277281    3722 start.go:128] duration metric: took 2.336341959s to createHost
	I0923 17:12:07.277335    3722 start.go:83] releasing machines lock for "multinode-317000", held for 2.3368035s
	W0923 17:12:07.277636    3722 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:12:07.294276    3722 out.go:201] 
	W0923 17:12:07.298425    3722 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:12:07.298451    3722 out.go:270] * 
	* 
	W0923 17:12:07.300828    3722 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:12:07.314247    3722 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-317000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (66.573541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
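The verbose trace above is useful for isolating where provisioning dies: SSH key creation, the qemu-img convert, and the qemu-img resize all succeed, and only the final socket_vmnet_client exec fails. The following sketch reproduces the two disk-image steps the log shows, with placeholder paths; the actual libmachine driver code differs.

// Sketch of the raw-to-qcow2 convert and resize seen in the trace.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func prepareDisk(raw, qcow2 string, extraMB int) error {
	// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// qemu-img resize <qcow2> +<extraMB>M (the log shows +20000M)
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		log.Fatal(err)
	}
	fmt.Println("disk image ready")
}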

TestMultiNode/serial/DeployApp2Nodes (99.15s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.125875ms)

** stderr ** 
	error: cluster "multinode-317000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- rollout status deployment/busybox: exit status 1 (59.283792ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.922208ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:07.639457    1596 retry.go:31] will retry after 524.235122ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.807625ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:08.269817    1596 retry.go:31] will retry after 1.503633886s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.211792ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:09.879988    1596 retry.go:31] will retry after 3.041878315s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.727208ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:13.027911    1596 retry.go:31] will retry after 1.793387509s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.343292ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:14.927024    1596 retry.go:31] will retry after 3.303022616s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.770417ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:18.337133    1596 retry.go:31] will retry after 6.112723371s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.727875ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:24.556818    1596 retry.go:31] will retry after 9.185621146s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.344708ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:33.847865    1596 retry.go:31] will retry after 10.438890063s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.820667ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:12:44.391791    1596 retry.go:31] will retry after 28.550435094s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.914125ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 17:13:13.046657    1596 retry.go:31] will retry after 33.133769588s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.959125ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.612292ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.804959ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.875667ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.092917ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.637167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (99.15s)
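The I0923 retry.go lines above show the polling pattern behind this test's 99-second runtime: each failed `kubectl get pods` is retried after a jittered, roughly increasing delay (524ms, 1.5s, 3s, ... 33s) until the budget runs out, against a cluster that never existed. Below is a minimal stand-in for that backoff loop, not minikube's actual retry.go.

// Sketch: retry with jittered, roughly doubling delays up to a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(deadline time.Duration, fn func() error) error {
	start := time.Now()
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter the delay, then roughly double it, as the logged intervals suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	_ = retryWithBackoff(5*time.Second, func() error {
		return errors.New(`no server found for cluster "multinode-317000"`)
	})
}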

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-317000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.187917ms)

** stderr ** 
	error: no server found for cluster "multinode-317000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.3005ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr: exit status 83 (41.954166ms)

-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

-- /stdout --
** stderr ** 
	I0923 17:13:46.662972    3820 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:46.663143    3820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:46.663146    3820 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:46.663148    3820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:46.663260    3820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:46.663513    3820 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:46.663734    3820 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:46.667551    3820 out.go:177] * The control-plane node multinode-317000 host is not running: state=Stopped
	I0923 17:13:46.672325    3820 out.go:177]   To start a cluster, run: "minikube start -p multinode-317000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-317000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.159375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-317000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-317000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.699083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-317000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-317000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-317000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.191583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
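
Note: two errors stack here. kubectl exits 1 because the multinode-317000 context was never written to the kubeconfig, and the test's follow-up JSON decode then fails on the empty output. The second message is exactly what encoding/json produces for zero-length input; a minimal sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl printed nothing to stdout, so the test decodes an empty byte slice.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }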

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-317000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-317000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-317000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-317000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.031625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
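
Note: the quoted profile JSON is intact despite the line wrapping, and its Config.Nodes array holds a single control-plane entry where the test demands three nodes. A sketch of the count that fails, using a hypothetical struct trimmed to the relevant fields and a correspondingly trimmed sample of the "profile list --output json" payload above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Hypothetical, trimmed mirror of the profile JSON; only the fields
    // needed to count nodes.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-317000",` +
            `"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var p profileList
        if err := json.Unmarshal(raw, &p); err != nil {
            panic(err)
        }
        fmt.Println(len(p.Valid[0].Config.Nodes)) // 1, but the test expects 3
    }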

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr: exit status 7 (29.965291ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-317000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:46.870983    3832 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:46.871128    3832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:46.871132    3832 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:46.871134    3832 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:46.871249    3832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:46.871381    3832 out.go:352] Setting JSON to true
	I0923 17:13:46.871392    3832 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:46.871457    3832 notify.go:220] Checking for updates...
	I0923 17:13:46.871620    3832 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:46.871628    3832 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:46.871881    3832 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:46.871884    3832 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:46.871886    3832 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-317000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.0075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
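
Note: the decode error is a type mismatch rather than malformed JSON. With a single (stopped) node, "status --output json" emits one object, but the test unmarshals into a slice of statuses. The mismatch reproduces in isolation with the exact stdout above (Status here is a hypothetical stand-in for the cluster.Status named in the error):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Hypothetical stand-in for the cluster.Status shape in the error message.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        out := []byte(`{"Name":"multinode-317000","Host":"Stopped",` +
            `"Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
        var statuses []Status
        err := json.Unmarshal(out, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }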

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node stop m03: exit status 85 (46.700916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status: exit status 7 (30.074125ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (30.167125ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:47.008845    3840 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:47.008988    3840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.008991    3840 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:47.008993    3840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.009108    3840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:47.009238    3840 out.go:352] Setting JSON to false
	I0923 17:13:47.009249    3840 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:47.009312    3840 notify.go:220] Checking for updates...
	I0923 17:13:47.009457    3840 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:47.009465    3840 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:47.009694    3840 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:47.009698    3840 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:47.009700    3840 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.649834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
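
Note: exit status 85 (GUEST_NODE_RETRIEVE) follows from the earlier failures; no m03 worker was ever added, so there is nothing to stop. The subsequent assertion then scans the status text for running kubelets and finds none. A rough sketch of that count, not the exact logic in multinode_test.go:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status text copied from the log; every component is Stopped.
        status := "multinode-317000\ntype: Control Plane\nhost: Stopped\n" +
            "kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println(strings.Count(status, "kubelet: Running")) // 0 running kubelets
    }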

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.735667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:47.070673    3844 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:47.070897    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.070900    3844 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:47.070903    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.071023    3844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:47.071255    3844 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:47.071475    3844 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:47.076290    3844 out.go:201] 
	W0923 17:13:47.079437    3844 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0923 17:13:47.079443    3844 out.go:270] * 
	* 
	W0923 17:13:47.081206    3844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:13:47.084329    3844 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0923 17:13:47.070673    3844 out.go:345] Setting OutFile to fd 1 ...
I0923 17:13:47.070897    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 17:13:47.070900    3844 out.go:358] Setting ErrFile to fd 2...
I0923 17:13:47.070903    3844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 17:13:47.071023    3844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 17:13:47.071255    3844 mustload.go:65] Loading cluster: multinode-317000
I0923 17:13:47.071475    3844 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 17:13:47.076290    3844 out.go:201] 
W0923 17:13:47.079437    3844 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0923 17:13:47.079443    3844 out.go:270] * 
* 
W0923 17:13:47.081206    3844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 17:13:47.084329    3844 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (30.372ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:47.117062    3846 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:47.117208    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.117211    3846 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:47.117214    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:47.117344    3846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:47.117470    3846 out.go:352] Setting JSON to false
	I0923 17:13:47.117481    3846 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:47.117549    3846 notify.go:220] Checking for updates...
	I0923 17:13:47.117688    3846 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:47.117697    3846 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:47.117940    3846 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:47.117945    3846 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:47.117947    3846 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:13:47.118749    1596 retry.go:31] will retry after 834.258336ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (73.603208ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:48.026685    3848 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:48.026915    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:48.026920    3848 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:48.026923    3848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:48.027124    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:48.027288    3848 out.go:352] Setting JSON to false
	I0923 17:13:48.027301    3848 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:48.027349    3848 notify.go:220] Checking for updates...
	I0923 17:13:48.027608    3848 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:48.027618    3848 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:48.027931    3848 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:48.027936    3848 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:48.027938    3848 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:13:48.028950    1596 retry.go:31] will retry after 2.055544871s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (74.091458ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:50.158680    3850 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:50.158869    3850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:50.158873    3850 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:50.158876    3850 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:50.159046    3850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:50.159206    3850 out.go:352] Setting JSON to false
	I0923 17:13:50.159220    3850 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:50.159257    3850 notify.go:220] Checking for updates...
	I0923 17:13:50.159501    3850 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:50.159511    3850 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:50.159839    3850 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:50.159844    3850 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:50.159847    3850 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:13:50.160958    1596 retry.go:31] will retry after 3.131730055s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (72.245458ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:53.365098    3852 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:53.365301    3852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:53.365305    3852 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:53.365309    3852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:53.365479    3852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:53.365644    3852 out.go:352] Setting JSON to false
	I0923 17:13:53.365657    3852 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:53.365690    3852 notify.go:220] Checking for updates...
	I0923 17:13:53.365917    3852 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:53.365927    3852 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:53.366225    3852 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:53.366230    3852 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:53.366233    3852 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:13:53.367261    1596 retry.go:31] will retry after 2.583622087s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (74.068375ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:13:56.025028    3854 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:13:56.025221    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:56.025225    3854 out.go:358] Setting ErrFile to fd 2...
	I0923 17:13:56.025228    3854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:13:56.025427    3854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:13:56.025596    3854 out.go:352] Setting JSON to false
	I0923 17:13:56.025610    3854 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:13:56.025641    3854 notify.go:220] Checking for updates...
	I0923 17:13:56.025867    3854 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:13:56.025880    3854 status.go:174] checking status of multinode-317000 ...
	I0923 17:13:56.026190    3854 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:13:56.026195    3854 status.go:377] host is not running, skipping remaining checks
	I0923 17:13:56.026198    3854 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:13:56.027282    1596 retry.go:31] will retry after 5.645503274s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (73.796333ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:14:01.746541    3856 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:01.746745    3856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:01.746750    3856 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:01.746753    3856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:01.746930    3856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:01.747081    3856 out.go:352] Setting JSON to false
	I0923 17:14:01.747094    3856 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:14:01.747122    3856 notify.go:220] Checking for updates...
	I0923 17:14:01.747366    3856 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:01.747381    3856 status.go:174] checking status of multinode-317000 ...
	I0923 17:14:01.747719    3856 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:14:01.747724    3856 status.go:377] host is not running, skipping remaining checks
	I0923 17:14:01.747727    3856 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:14:01.748894    1596 retry.go:31] will retry after 5.822092085s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (72.919625ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:14:07.643885    3858 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:07.644054    3858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:07.644058    3858 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:07.644061    3858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:07.644247    3858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:07.644407    3858 out.go:352] Setting JSON to false
	I0923 17:14:07.644420    3858 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:14:07.644488    3858 notify.go:220] Checking for updates...
	I0923 17:14:07.644702    3858 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:07.644712    3858 status.go:174] checking status of multinode-317000 ...
	I0923 17:14:07.645025    3858 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:14:07.645030    3858 status.go:377] host is not running, skipping remaining checks
	I0923 17:14:07.645033    3858 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0923 17:14:07.646116    1596 retry.go:31] will retry after 16.851381992s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr: exit status 7 (73.273917ms)

                                                
                                                
-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:14:24.570495    3863 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:24.570695    3863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:24.570699    3863 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:24.570702    3863 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:24.570879    3863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:24.571043    3863 out.go:352] Setting JSON to false
	I0923 17:14:24.571056    3863 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:14:24.571100    3863 notify.go:220] Checking for updates...
	I0923 17:14:24.571343    3863 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:24.571355    3863 status.go:174] checking status of multinode-317000 ...
	I0923 17:14:24.571657    3863 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:14:24.571662    3863 status.go:377] host is not running, skipping remaining checks
	I0923 17:14:24.571665    3863 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-317000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (33.626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (37.57s)
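
Note: the ~37 s wall time is almost entirely the status poll; retry.go backs off with jittered, growing delays (0.8 s, 2.1 s, 3.1 s, 2.6 s, 5.6 s, 5.8 s, 16.9 s) before giving up on exit status 7. A minimal stand-in for that pattern, illustrative only and not minikube's actual retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, doubling
    // delay between failures, then returns the last error.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base + time.Duration(rand.Int63n(int64(base))) // add jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
            base *= 2
        }
        return err
    }

    func main() {
        _ = retry(8, time.Second, func() error { return errors.New("exit status 7") })
    }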

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-317000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-317000: (3.507158708s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.226131959s)

                                                
                                                
-- stdout --
	* [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:14:28.207106    3887 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:28.207268    3887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:28.207273    3887 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:28.207276    3887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:28.207447    3887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:28.208722    3887 out.go:352] Setting JSON to false
	I0923 17:14:28.227971    3887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2631,"bootTime":1727134237,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:14:28.228042    3887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:14:28.232731    3887 out.go:177] * [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:14:28.239730    3887 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:14:28.239808    3887 notify.go:220] Checking for updates...
	I0923 17:14:28.246648    3887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:14:28.249698    3887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:14:28.252583    3887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:14:28.255703    3887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:14:28.258692    3887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:14:28.262049    3887 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:28.262115    3887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:14:28.266663    3887 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:14:28.277589    3887 start.go:297] selected driver: qemu2
	I0923 17:14:28.277595    3887 start.go:901] validating driver "qemu2" against &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:14:28.277664    3887 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:14:28.280068    3887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:14:28.280094    3887 cni.go:84] Creating CNI manager for ""
	I0923 17:14:28.280123    3887 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 17:14:28.280162    3887 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:14:28.283931    3887 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:14:28.290654    3887 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0923 17:14:28.294646    3887 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:14:28.294668    3887 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:14:28.294676    3887 cache.go:56] Caching tarball of preloaded images
	I0923 17:14:28.294768    3887 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:14:28.294774    3887 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:14:28.294834    3887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/multinode-317000/config.json ...
	I0923 17:14:28.295219    3887 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:14:28.295259    3887 start.go:364] duration metric: took 32.25µs to acquireMachinesLock for "multinode-317000"
	I0923 17:14:28.295271    3887 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:14:28.295274    3887 fix.go:54] fixHost starting: 
	I0923 17:14:28.295418    3887 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0923 17:14:28.295427    3887 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:14:28.299681    3887 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0923 17:14:28.307682    3887 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:14:28.307722    3887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6d:f4:b5:97:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:14:28.310030    3887 main.go:141] libmachine: STDOUT: 
	I0923 17:14:28.310053    3887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:14:28.310097    3887 fix.go:56] duration metric: took 14.819459ms for fixHost
	I0923 17:14:28.310103    3887 start.go:83] releasing machines lock for "multinode-317000", held for 14.83925ms
	W0923 17:14:28.310111    3887 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:14:28.310151    3887 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:14:28.310157    3887 start.go:729] Will try again in 5 seconds ...
	I0923 17:14:33.312129    3887 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:14:33.312622    3887 start.go:364] duration metric: took 390.834µs to acquireMachinesLock for "multinode-317000"
	I0923 17:14:33.312776    3887 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:14:33.312800    3887 fix.go:54] fixHost starting: 
	I0923 17:14:33.313497    3887 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0923 17:14:33.313523    3887 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:14:33.317877    3887 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0923 17:14:33.325886    3887 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:14:33.326110    3887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6d:f4:b5:97:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:14:33.334759    3887 main.go:141] libmachine: STDOUT: 
	I0923 17:14:33.334844    3887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:14:33.334927    3887 fix.go:56] duration metric: took 22.129834ms for fixHost
	I0923 17:14:33.334947    3887 start.go:83] releasing machines lock for "multinode-317000", held for 22.291541ms
	W0923 17:14:33.335170    3887 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:14:33.342849    3887 out.go:201] 
	W0923 17:14:33.346943    3887 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:14:33.347057    3887 out.go:270] * 
	* 
	W0923 17:14:33.349885    3887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:14:33.356841    3887 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-317000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (32.761541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.87s)
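
Note: every failed start in this run dies at the same point: socket_vmnet_client cannot dial /var/run/socket_vmnet, so QEMU is never launched. A minimal health check for the daemon on the affected host could look like the sketch below; the paths come from the SocketVMnetClientPath/SocketVMnetPath values recorded in the profile config, and the Homebrew service line is an assumption that only applies if socket_vmnet was installed via brew.

	# Is the socket_vmnet daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet          # no output means the daemon is not running
	ls -l /var/run/socket_vmnet     # the socket that socket_vmnet_client dials
	# Assumption: Homebrew-managed install. The daemon needs root, so the
	# service is restarted with sudo.
	sudo brew services restart socket_vmnet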

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 node delete m03: exit status 83 (39.642417ms)

-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-317000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (30.230292ms)

-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 17:14:33.541181    3901 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:33.541325    3901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:33.541328    3901 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:33.541331    3901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:33.541451    3901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:33.541600    3901 out.go:352] Setting JSON to false
	I0923 17:14:33.541610    3901 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:14:33.541673    3901 notify.go:220] Checking for updates...
	I0923 17:14:33.541818    3901 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:33.541827    3901 status.go:174] checking status of multinode-317000 ...
	I0923 17:14:33.542048    3901 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:14:33.542052    3901 status.go:377] host is not running, skipping remaining checks
	I0923 17:14:33.542054    3901 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.196583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
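
Note: the error text above already names the usual recovery path. A sketch of it, using the profile name from this run and the Network:socket_vmnet value from the saved config; this only helps once the socket_vmnet daemon is reachable again.

	# Recreate the profile, as the "may fix it" hint suggests:
	minikube delete -p multinode-317000
	minikube start -p multinode-317000 --driver=qemu2 --network=socket_vmnet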

TestMultiNode/serial/StopMultiNode (3.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-317000 stop: (3.046647833s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status: exit status 7 (66.788666ms)

-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr: exit status 7 (33.269209ms)

-- stdout --
	multinode-317000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 17:14:36.718531    3925 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:36.718683    3925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:36.718687    3925 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:36.718689    3925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:36.718833    3925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:36.718957    3925 out.go:352] Setting JSON to false
	I0923 17:14:36.718968    3925 mustload.go:65] Loading cluster: multinode-317000
	I0923 17:14:36.719020    3925 notify.go:220] Checking for updates...
	I0923 17:14:36.719188    3925 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:36.719200    3925 status.go:174] checking status of multinode-317000 ...
	I0923 17:14:36.719441    3925 status.go:364] multinode-317000 host status = "Stopped" (err=<nil>)
	I0923 17:14:36.719444    3925 status.go:377] host is not running, skipping remaining checks
	I0923 17:14:36.719446    3925 status.go:176] multinode-317000 status: &{Name:multinode-317000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr": multinode-317000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.581209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.18s)
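
Note: the failures at multinode_test.go:364 and :368 are count mismatches. The test expects one "host: Stopped" and one "kubelet: Stopped" line per node, but only the control-plane node still exists, so each count is 1 instead of 2. Assuming the test simply counts matching status lines (which the messages suggest), a rough shell equivalent of the check is:

	# One match per node is expected; here this prints 1, not 2.
	out/minikube-darwin-arm64 -p multinode-317000 status --alsologtostderr \
	  | grep -c 'host: Stopped'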

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179731167s)

-- stdout --
	* [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-317000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:14:36.778800    3929 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:14:36.778924    3929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:36.778928    3929 out.go:358] Setting ErrFile to fd 2...
	I0923 17:14:36.778930    3929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:14:36.779066    3929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:14:36.780071    3929 out.go:352] Setting JSON to false
	I0923 17:14:36.795969    3929 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2639,"bootTime":1727134237,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:14:36.796045    3929 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:14:36.800924    3929 out.go:177] * [multinode-317000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:14:36.807880    3929 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:14:36.807925    3929 notify.go:220] Checking for updates...
	I0923 17:14:36.813772    3929 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:14:36.816844    3929 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:14:36.818262    3929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:14:36.821748    3929 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:14:36.824836    3929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:14:36.828150    3929 config.go:182] Loaded profile config "multinode-317000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:14:36.828412    3929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:14:36.832847    3929 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:14:36.839951    3929 start.go:297] selected driver: qemu2
	I0923 17:14:36.839958    3929 start.go:901] validating driver "qemu2" against &{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:14:36.840023    3929 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:14:36.842182    3929 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:14:36.842207    3929 cni.go:84] Creating CNI manager for ""
	I0923 17:14:36.842231    3929 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 17:14:36.842273    3929 start.go:340] cluster config:
	{Name:multinode-317000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-317000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:14:36.845620    3929 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:14:36.852834    3929 out.go:177] * Starting "multinode-317000" primary control-plane node in "multinode-317000" cluster
	I0923 17:14:36.856771    3929 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:14:36.856784    3929 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:14:36.856789    3929 cache.go:56] Caching tarball of preloaded images
	I0923 17:14:36.856832    3929 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:14:36.856837    3929 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:14:36.856883    3929 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/multinode-317000/config.json ...
	I0923 17:14:36.857345    3929 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:14:36.857371    3929 start.go:364] duration metric: took 20.833µs to acquireMachinesLock for "multinode-317000"
	I0923 17:14:36.857380    3929 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:14:36.857385    3929 fix.go:54] fixHost starting: 
	I0923 17:14:36.857500    3929 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0923 17:14:36.857511    3929 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:14:36.864761    3929 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0923 17:14:36.868806    3929 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:14:36.868844    3929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6d:f4:b5:97:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:14:36.870793    3929 main.go:141] libmachine: STDOUT: 
	I0923 17:14:36.870810    3929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:14:36.870840    3929 fix.go:56] duration metric: took 13.453958ms for fixHost
	I0923 17:14:36.870845    3929 start.go:83] releasing machines lock for "multinode-317000", held for 13.470333ms
	W0923 17:14:36.870852    3929 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:14:36.870882    3929 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:14:36.870887    3929 start.go:729] Will try again in 5 seconds ...
	I0923 17:14:41.872890    3929 start.go:360] acquireMachinesLock for multinode-317000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:14:41.873279    3929 start.go:364] duration metric: took 317.583µs to acquireMachinesLock for "multinode-317000"
	I0923 17:14:41.873423    3929 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:14:41.873440    3929 fix.go:54] fixHost starting: 
	I0923 17:14:41.874091    3929 fix.go:112] recreateIfNeeded on multinode-317000: state=Stopped err=<nil>
	W0923 17:14:41.874114    3929 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:14:41.878568    3929 out.go:177] * Restarting existing qemu2 VM for "multinode-317000" ...
	I0923 17:14:41.886554    3929 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:14:41.886686    3929 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:6d:f4:b5:97:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/multinode-317000/disk.qcow2
	I0923 17:14:41.893903    3929 main.go:141] libmachine: STDOUT: 
	I0923 17:14:41.893976    3929 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:14:41.894045    3929 fix.go:56] duration metric: took 20.606375ms for fixHost
	I0923 17:14:41.894066    3929 start.go:83] releasing machines lock for "multinode-317000", held for 20.765459ms
	W0923 17:14:41.894278    3929 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:14:41.902320    3929 out.go:201] 
	W0923 17:14:41.906565    3929 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:14:41.906588    3929 out.go:270] * 
	* 
	W0923 17:14:41.908748    3929 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:14:41.917509    3929 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-317000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (67.658916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
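
Note: the driver invocation that fails throughout this run is worth unpacking. socket_vmnet_client dials the unix socket first and, on success, execs QEMU with the connected socket inherited as file descriptor 3; the "Connection refused" happens at that first dial, so qemu-system-aarch64 never starts. The command below is abridged from the log lines above, with editorial annotations:

	#   -M virt,highmem=off    ARM "virt" machine type
	#   -accel hvf             macOS Hypervisor.framework acceleration
	#   -m 2200 -smp 2         memory (MB) and vCPUs from the profile
	#   -netdev socket,fd=3    NIC backed by the fd inherited from socket_vmnet_client
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2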

TestMultiNode/serial/ValidateNameConflict (20.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-317000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000-m01 --driver=qemu2 : exit status 80 (10.073402875s)

-- stdout --
	* [multinode-317000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000-m01" primary control-plane node in "multinode-317000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 : exit status 80 (9.975491583s)

-- stdout --
	* [multinode-317000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-317000-m02" primary control-plane node in "multinode-317000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-317000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-317000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-317000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-317000: exit status 83 (83.230958ms)

-- stdout --
	* The control-plane node multinode-317000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-317000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-317000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-317000 -n multinode-317000: exit status 7 (30.328458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-317000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.28s)
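
Note: this test exercises profile names that collide with a multi-node cluster's per-node machine names (multinode-317000-m01, multinode-317000-m02). A condensed replay of what the test drives, taken directly from the Run lines above:

	out/minikube-darwin-arm64 start -p multinode-317000-m01 --driver=qemu2
	out/minikube-darwin-arm64 start -p multinode-317000-m02 --driver=qemu2
	out/minikube-darwin-arm64 node add -p multinode-317000    # refused: host is Stopped
	out/minikube-darwin-arm64 delete -p multinode-317000-m02  # cleanup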

TestPreload (10.14s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-853000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-853000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.984807417s)

-- stdout --
	* [test-preload-853000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-853000" primary control-plane node in "test-preload-853000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-853000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:15:02.428168    3987 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:15:02.428289    3987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:02.428292    3987 out.go:358] Setting ErrFile to fd 2...
	I0923 17:15:02.428294    3987 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:15:02.428423    3987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:15:02.429517    3987 out.go:352] Setting JSON to false
	I0923 17:15:02.445610    3987 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2665,"bootTime":1727134237,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:15:02.445672    3987 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:15:02.450854    3987 out.go:177] * [test-preload-853000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:15:02.459831    3987 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:15:02.459955    3987 notify.go:220] Checking for updates...
	I0923 17:15:02.467825    3987 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:15:02.470783    3987 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:15:02.473830    3987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:15:02.476759    3987 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:15:02.479778    3987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:15:02.481616    3987 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:15:02.481670    3987 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:15:02.485731    3987 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:15:02.492628    3987 start.go:297] selected driver: qemu2
	I0923 17:15:02.492635    3987 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:15:02.492641    3987 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:15:02.494815    3987 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:15:02.497752    3987 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:15:02.500896    3987 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:15:02.500929    3987 cni.go:84] Creating CNI manager for ""
	I0923 17:15:02.500953    3987 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:15:02.500958    3987 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:15:02.500989    3987 start.go:340] cluster config:
	{Name:test-preload-853000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:15:02.504468    3987 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.511767    3987 out.go:177] * Starting "test-preload-853000" primary control-plane node in "test-preload-853000" cluster
	I0923 17:15:02.515811    3987 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0923 17:15:02.515913    3987 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/test-preload-853000/config.json ...
	I0923 17:15:02.515917    3987 cache.go:107] acquiring lock: {Name:mk164bf50ef3aab314d6f3e22955f2211bcd6f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.515940    3987 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/test-preload-853000/config.json: {Name:mk4f4981ada3c68fbc4148d9dd1e8a35ceb9f06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:15:02.515935    3987 cache.go:107] acquiring lock: {Name:mkf02b4c7425a272c3c85e11ad79916194d3b562 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.515949    3987 cache.go:107] acquiring lock: {Name:mkd7678e8ebeb2e8e180479e68b9a2a0ef09b896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.515965    3987 cache.go:107] acquiring lock: {Name:mk9c3f2b4d38925419ed74fb193bde542ea76dd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.515919    3987 cache.go:107] acquiring lock: {Name:mkd7e231fe1764ba47397126a95818f5a26960b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.516113    3987 cache.go:107] acquiring lock: {Name:mk4a6d4db82f0200760dbb27cc7ebd3e71d864c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.516173    3987 cache.go:107] acquiring lock: {Name:mkc913dd88712ef4b311b11d5ebda28cbe3665f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.516331    3987 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 17:15:02.516335    3987 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 17:15:02.516411    3987 start.go:360] acquireMachinesLock for test-preload-853000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:02.516400    3987 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 17:15:02.516444    3987 cache.go:107] acquiring lock: {Name:mk546f2a9d0d4a576bc2d9660ec86b7f507edd4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:15:02.516478    3987 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 17:15:02.516504    3987 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:15:02.516521    3987 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:15:02.516566    3987 start.go:364] duration metric: took 130.625µs to acquireMachinesLock for "test-preload-853000"
	I0923 17:15:02.516605    3987 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:15:02.516621    3987 start.go:93] Provisioning new machine with config: &{Name:test-preload-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:02.516649    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:02.516438    3987 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 17:15:02.523750    3987 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:15:02.527530    3987 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 17:15:02.528667    3987 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:15:02.528787    3987 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:15:02.528769    3987 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 17:15:02.530405    3987 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 17:15:02.530439    3987 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 17:15:02.530546    3987 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:15:02.530568    3987 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 17:15:02.542646    3987 start.go:159] libmachine.API.Create for "test-preload-853000" (driver="qemu2")
	I0923 17:15:02.542669    3987 client.go:168] LocalClient.Create starting
	I0923 17:15:02.542761    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:02.542797    3987 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:02.542814    3987 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:02.542856    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:02.542881    3987 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:02.542891    3987 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:02.543248    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:02.705778    3987 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:02.816412    3987 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:02.816433    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:02.816652    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:02.826541    3987 main.go:141] libmachine: STDOUT: 
	I0923 17:15:02.826563    3987 main.go:141] libmachine: STDERR: 
	I0923 17:15:02.826615    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2 +20000M
	I0923 17:15:02.835577    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:02.835601    3987 main.go:141] libmachine: STDERR: 
	I0923 17:15:02.835621    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:02.835624    3987 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:02.835645    3987 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:02.835673    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:98:d1:28:dd:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:02.837743    3987 main.go:141] libmachine: STDOUT: 
	I0923 17:15:02.837759    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:02.837777    3987 client.go:171] duration metric: took 295.112416ms to LocalClient.Create
	I0923 17:15:02.980727    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 17:15:03.002644    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 17:15:03.011463    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0923 17:15:03.039656    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0923 17:15:03.066755    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0923 17:15:03.108465    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0923 17:15:03.111102    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0923 17:15:03.111122    3987 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 594.983041ms
	I0923 17:15:03.111146    3987 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0923 17:15:03.112571    3987 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 17:15:03.112617    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0923 17:15:03.440793    3987 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 17:15:03.440897    3987 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 17:15:04.399698    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 17:15:04.399764    3987 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.883902708s
	I0923 17:15:04.399787    3987 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 17:15:04.515592    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0923 17:15:04.515637    3987 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.999252667s
	I0923 17:15:04.515698    3987 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0923 17:15:04.837981    3987 start.go:128] duration metric: took 2.321383125s to createHost
	I0923 17:15:04.838030    3987 start.go:83] releasing machines lock for "test-preload-853000", held for 2.321526s
	W0923 17:15:04.838086    3987 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:04.860468    3987 out.go:177] * Deleting "test-preload-853000" in qemu2 ...
	W0923 17:15:04.899323    3987 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:04.899350    3987 start.go:729] Will try again in 5 seconds ...
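The driver deletes the half-created machine and retries once after a fixed delay, which is why each failing test in this report burns roughly ten seconds. Reduced to its shape (attempt count and 5-second delay taken from these lines, everything else assumed), the loop looks like:

    package main

    import (
    	"errors"
    	"log"
    	"time"
    )

    // startWithRetry mirrors the try / delete / retry pattern in this log:
    // one initial attempt, a 5-second pause, then one final attempt.
    func startWithRetry(createHost func() error) error {
    	var err error
    	for attempt := 0; attempt < 2; attempt++ {
    		if err = createHost(); err == nil {
    			return nil
    		}
    		if attempt == 0 {
    			log.Printf("StartHost failed, but will try again: %v", err)
    			time.Sleep(5 * time.Second)
    		}
    	}
    	return err
    }

    func main() {
    	err := startWithRetry(func() error {
    		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    	})
    	log.Println("final error:", err)
    }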
	I0923 17:15:05.981877    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0923 17:15:05.981929    3987 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.466087417s
	I0923 17:15:05.981981    3987 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0923 17:15:07.380676    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0923 17:15:07.380744    3987 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.86498725s
	I0923 17:15:07.380772    3987 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0923 17:15:07.451443    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0923 17:15:07.451484    3987 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.935725s
	I0923 17:15:07.451507    3987 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0923 17:15:08.942204    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0923 17:15:08.942252    3987 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.426490958s
	I0923 17:15:08.942288    3987 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0923 17:15:09.899352    3987 start.go:360] acquireMachinesLock for test-preload-853000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:15:09.899774    3987 start.go:364] duration metric: took 353.875µs to acquireMachinesLock for "test-preload-853000"
	I0923 17:15:09.899882    3987 start.go:93] Provisioning new machine with config: &{Name:test-preload-853000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-853000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:15:09.900113    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:15:09.924734    3987 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:15:09.975764    3987 start.go:159] libmachine.API.Create for "test-preload-853000" (driver="qemu2")
	I0923 17:15:09.975804    3987 client.go:168] LocalClient.Create starting
	I0923 17:15:09.975921    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:15:09.975991    3987 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:09.976016    3987 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:09.976080    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:15:09.976125    3987 main.go:141] libmachine: Decoding PEM data...
	I0923 17:15:09.976143    3987 main.go:141] libmachine: Parsing certificate...
	I0923 17:15:09.976669    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:15:10.148932    3987 main.go:141] libmachine: Creating SSH key...
	I0923 17:15:10.321781    3987 main.go:141] libmachine: Creating Disk image...
	I0923 17:15:10.321789    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:15:10.322030    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:10.331632    3987 main.go:141] libmachine: STDOUT: 
	I0923 17:15:10.331654    3987 main.go:141] libmachine: STDERR: 
	I0923 17:15:10.331714    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2 +20000M
	I0923 17:15:10.339889    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:15:10.339912    3987 main.go:141] libmachine: STDERR: 
	I0923 17:15:10.339924    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:10.339930    3987 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:15:10.339939    3987 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:15:10.339980    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:97:97:23:64:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/test-preload-853000/disk.qcow2
	I0923 17:15:10.341711    3987 main.go:141] libmachine: STDOUT: 
	I0923 17:15:10.341727    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:15:10.341740    3987 client.go:171] duration metric: took 365.941333ms to LocalClient.Create
	I0923 17:15:11.692034    3987 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0923 17:15:11.692098    3987 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.176296292s
	I0923 17:15:11.692129    3987 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0923 17:15:11.692184    3987 cache.go:87] Successfully saved all images to host disk.
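Note that image caching succeeds even though the VM never starts; it touches only the host disk. The cache lines above also show the on-disk naming scheme: the image tag's ':' becomes '_' under .minikube/cache/images/<arch>/. A small sketch of that mapping, and of the GOARCH comparison behind the two "arch mismatch ... fixing" warnings (minikube's actual internals are assumed, not copied):

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"runtime"
    	"strings"
    )

    // cachePath reproduces the mapping visible in the log, e.g.
    // "registry.k8s.io/kube-proxy:v1.24.4" ->
    // ".../cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4"
    func cachePath(minikubeHome, ref string) string {
    	return filepath.Join(minikubeHome, "cache", "images", runtime.GOARCH,
    		strings.ReplaceAll(ref, ":", "_"))
    }

    func main() {
    	fmt.Println(cachePath("/Users/jenkins/.minikube", "registry.k8s.io/kube-proxy:v1.24.4"))

    	// The "arch mismatch: want arm64 got amd64. fixing" warnings boil down
    	// to a comparison like this against the image config's architecture:
    	if got := "amd64"; got != runtime.GOARCH {
    		fmt.Printf("arch mismatch: want %s got %s. fixing\n", runtime.GOARCH, got)
    	}
    }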
	I0923 17:15:12.343889    3987 start.go:128] duration metric: took 2.443826584s to createHost
	I0923 17:15:12.343938    3987 start.go:83] releasing machines lock for "test-preload-853000", held for 2.444217333s
	W0923 17:15:12.344214    3987 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-853000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:15:12.353705    3987 out.go:201] 
	W0923 17:15:12.357816    3987 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:15:12.357842    3987 out.go:270] * 
	W0923 17:15:12.360559    3987 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:15:12.369556    3987 out.go:201] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-853000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-23 17:15:12.387749 -0700 PDT m=+2306.418204543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-853000 -n test-preload-853000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-853000 -n test-preload-853000: exit status 7 (70.456875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-853000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-853000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-853000
--- FAIL: TestPreload (10.14s)
TestScheduledStopUnix (9.99s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-114000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-114000 --memory=2048 --driver=qemu2 : exit status 80 (9.831195542s)
-- stdout --
	* [scheduled-stop-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-114000" primary control-plane node in "scheduled-stop-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-114000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-114000" primary control-plane node in "scheduled-stop-114000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-114000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-114000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-23 17:15:22.372836 -0700 PDT m=+2316.403616960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-114000 -n scheduled-stop-114000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-114000 -n scheduled-stop-114000: exit status 7 (68.624ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-114000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-114000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-114000
--- FAIL: TestScheduledStopUnix (9.99s)
TestSkaffold (12.45s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2373652429 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2373652429 version: (1.063756542s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-537000 --memory=2600 --driver=qemu2 
E0923 17:15:32.781419    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-537000 --memory=2600 --driver=qemu2 : exit status 80 (10.011087083s)
-- stdout --
	* [skaffold-537000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-537000" primary control-plane node in "skaffold-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-537000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-537000" primary control-plane node in "skaffold-537000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-537000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-537000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-23 17:15:34.832128 -0700 PDT m=+2328.863314585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-537000 -n skaffold-537000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-537000 -n skaffold-537000: exit status 7 (59.636375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-537000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-537000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-537000
--- FAIL: TestSkaffold (12.45s)
TestRunningBinaryUpgrade (603.46s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2713300613 start -p running-upgrade-903000 --memory=2200 --vm-driver=qemu2 
E0923 17:16:55.870531    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2713300613 start -p running-upgrade-903000 --memory=2200 --vm-driver=qemu2 : (55.099010042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.052858833s)
-- stdout --
	* [running-upgrade-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-903000" primary control-plane node in "running-upgrade-903000" cluster
	* Updating the running qemu2 "running-upgrade-903000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0923 17:17:12.148147    4371 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:17:12.148296    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:17:12.148301    4371 out.go:358] Setting ErrFile to fd 2...
	I0923 17:17:12.148303    4371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:17:12.148439    4371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:17:12.149755    4371 out.go:352] Setting JSON to false
	I0923 17:17:12.166372    4371 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2795,"bootTime":1727134237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:17:12.166468    4371 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:17:12.171527    4371 out.go:177] * [running-upgrade-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:17:12.180393    4371 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:17:12.180436    4371 notify.go:220] Checking for updates...
	I0923 17:17:12.189333    4371 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:17:12.192396    4371 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:17:12.195347    4371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:17:12.198355    4371 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:17:12.199521    4371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:17:12.202575    4371 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:17:12.206374    4371 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 17:17:12.209374    4371 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:17:12.213297    4371 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:17:12.220360    4371 start.go:297] selected driver: qemu2
	I0923 17:17:12.220365    4371 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:17:12.220408    4371 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:17:12.222791    4371 cni.go:84] Creating CNI manager for ""
	I0923 17:17:12.222818    4371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
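cni.go settles on bridge here because Kubernetes 1.24+ dropped dockershim's built-in networking, so a docker-runtime cluster on the qemu2 driver needs an explicit CNI. A deliberately reduced sketch of that decision (the real selector in cni.go has many more branches; this shows only the one this log exercises):

    package main

    import "fmt"

    // chooseCNI illustrates the recommendation logged above; it is not
    // minikube's full selection logic.
    func chooseCNI(driver, containerRuntime string, k8sMinor int) string {
    	if containerRuntime == "docker" && k8sMinor >= 24 {
    		return "bridge" // dockershim networking is gone; pick a CNI explicitly
    	}
    	return "" // empty: defer to minikube's other defaults
    }

    func main() {
    	fmt.Println(chooseCNI("qemu2", "docker", 24)) // bridge
    }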
	I0923 17:17:12.222838    4371 start.go:340] cluster config:
	{Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:17:12.222887    4371 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:17:12.230323    4371 out.go:177] * Starting "running-upgrade-903000" primary control-plane node in "running-upgrade-903000" cluster
	I0923 17:17:12.234246    4371 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:17:12.234261    4371 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 17:17:12.234266    4371 cache.go:56] Caching tarball of preloaded images
	I0923 17:17:12.234327    4371 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:17:12.234332    4371 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 17:17:12.234380    4371 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/config.json ...
	I0923 17:17:12.234767    4371 start.go:360] acquireMachinesLock for running-upgrade-903000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:17:12.234799    4371 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "running-upgrade-903000"
	I0923 17:17:12.234808    4371 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:17:12.234813    4371 fix.go:54] fixHost starting: 
	I0923 17:17:12.235390    4371 fix.go:112] recreateIfNeeded on running-upgrade-903000: state=Running err=<nil>
	W0923 17:17:12.235398    4371 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:17:12.243275    4371 out.go:177] * Updating the running qemu2 "running-upgrade-903000" VM ...
	I0923 17:17:12.247317    4371 machine.go:93] provisionDockerMachine start ...
	I0923 17:17:12.247359    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.247471    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.247476    4371 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 17:17:12.304533    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-903000
	
	I0923 17:17:12.304552    4371 buildroot.go:166] provisioning hostname "running-upgrade-903000"
	I0923 17:17:12.304602    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.304719    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.304728    4371 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-903000 && echo "running-upgrade-903000" | sudo tee /etc/hostname
	I0923 17:17:12.360435    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-903000
	
	I0923 17:17:12.360496    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.360615    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.360623    4371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-903000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-903000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-903000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 17:17:12.412834    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 17:17:12.412845    4371 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19696-1109/.minikube CaCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19696-1109/.minikube}
	I0923 17:17:12.412856    4371 buildroot.go:174] setting up certificates
	I0923 17:17:12.412861    4371 provision.go:84] configureAuth start
	I0923 17:17:12.412865    4371 provision.go:143] copyHostCerts
	I0923 17:17:12.412949    4371 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem, removing ...
	I0923 17:17:12.412955    4371 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem
	I0923 17:17:12.413086    4371 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem (1082 bytes)
	I0923 17:17:12.413269    4371 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem, removing ...
	I0923 17:17:12.413272    4371 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem
	I0923 17:17:12.413319    4371 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem (1123 bytes)
	I0923 17:17:12.413425    4371 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem, removing ...
	I0923 17:17:12.413429    4371 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem
	I0923 17:17:12.413473    4371 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem (1679 bytes)
	I0923 17:17:12.413557    4371 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-903000 san=[127.0.0.1 localhost minikube running-upgrade-903000]
	I0923 17:17:12.519552    4371 provision.go:177] copyRemoteCerts
	I0923 17:17:12.519600    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 17:17:12.519609    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:17:12.550966    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 17:17:12.557618    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 17:17:12.564750    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 17:17:12.571049    4371 provision.go:87] duration metric: took 158.180083ms to configureAuth
	I0923 17:17:12.571060    4371 buildroot.go:189] setting minikube options for container-runtime
	I0923 17:17:12.571166    4371 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:17:12.571202    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.571297    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.571302    4371 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 17:17:12.626216    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 17:17:12.626229    4371 buildroot.go:70] root file system type: tmpfs
	I0923 17:17:12.626286    4371 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 17:17:12.626347    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.626465    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.626498    4371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 17:17:12.683072    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 17:17:12.683136    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.683251    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.683259    4371 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 17:17:12.735942    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 17:17:12.735954    4371 machine.go:96] duration metric: took 488.645667ms to provisionDockerMachine
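The one-liner above is what keeps provisioning idempotent: the rendered docker.service.new only replaces the live unit (followed by daemon-reload, enable, restart) when diff finds a difference. The same write-compare-replace pattern, sketched locally in Go with assumed file paths:

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    )

    // updateIfChanged mirrors the `diff ... || { mv ...; systemctl ... }` idiom:
    // replace the file only when the newly rendered content differs, and report
    // whether a reload/restart would be needed.
    func updateIfChanged(livePath string, rendered []byte) (bool, error) {
    	current, err := os.ReadFile(livePath)
    	if err == nil && bytes.Equal(current, rendered) {
    		return false, nil // identical: no restart necessary
    	}
    	if err := os.WriteFile(livePath, rendered, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := updateIfChanged("docker.service", []byte("[Unit]\n"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("restart needed:", changed)
    }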
	I0923 17:17:12.735960    4371 start.go:293] postStartSetup for "running-upgrade-903000" (driver="qemu2")
	I0923 17:17:12.735966    4371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 17:17:12.736024    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 17:17:12.736033    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:17:12.766552    4371 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 17:17:12.768000    4371 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 17:17:12.768007    4371 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/addons for local assets ...
	I0923 17:17:12.768088    4371 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/files for local assets ...
	I0923 17:17:12.768218    4371 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0923 17:17:12.768355    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 17:17:12.771045    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:17:12.782922    4371 start.go:296] duration metric: took 46.956292ms for postStartSetup
	I0923 17:17:12.782944    4371 fix.go:56] duration metric: took 548.149625ms for fixHost
	I0923 17:17:12.783014    4371 main.go:141] libmachine: Using SSH client type: native
	I0923 17:17:12.783134    4371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10538dc00] 0x105390440 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0923 17:17:12.783139    4371 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 17:17:12.833769    4371 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727137033.327484930
	
	I0923 17:17:12.833778    4371 fix.go:216] guest clock: 1727137033.327484930
	I0923 17:17:12.833781    4371 fix.go:229] Guest: 2024-09-23 17:17:13.32748493 -0700 PDT Remote: 2024-09-23 17:17:12.782945 -0700 PDT m=+0.655220751 (delta=544.53993ms)
	I0923 17:17:12.833793    4371 fix.go:200] guest clock delta is within tolerance: 544.53993ms
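
The clock check above runs "date +%s.%N" in the guest and compares it against host time; the measured delta inevitably includes the SSH round trip, which is why a tolerance is used instead of an equality test. A small sketch of the same measurement, where run_in_guest is a hypothetical stand-in for the SSH helper:

    # Hypothetical helper: run_in_guest CMD... executes CMD over SSH in the VM.
    host_now=$(date +%s.%N)
    guest_now=$(run_in_guest date +%s.%N)
    # bc handles the fractional seconds; a positive delta means the guest is ahead.
    echo "guest-host delta: $(echo "$guest_now - $host_now" | bc)s"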
	I0923 17:17:12.833796    4371 start.go:83] releasing machines lock for "running-upgrade-903000", held for 599.012334ms
	I0923 17:17:12.833871    4371 ssh_runner.go:195] Run: cat /version.json
	I0923 17:17:12.833881    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:17:12.833871    4371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 17:17:12.833923    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	W0923 17:17:12.834639    4371 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50249: connect: connection refused
	I0923 17:17:12.834656    4371 retry.go:31] will retry after 242.099608ms: dial tcp [::1]:50249: connect: connection refused
	W0923 17:17:13.106345    4371 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 17:17:13.106427    4371 ssh_runner.go:195] Run: systemctl --version
	I0923 17:17:13.108201    4371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 17:17:13.109864    4371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 17:17:13.109896    4371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 17:17:13.112597    4371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 17:17:13.116971    4371 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
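
Those two find ... -exec sed passes normalize any pre-existing bridge and podman CNI configs: IPv6 dst/subnet entries are dropped, and the IPv4 subnet and gateway are forced onto the cluster pod CIDR. A hedged before/after sketch against the 87-podman-bridge.conflist matched above:

    # Hypothetical excerpt before the rewrite:
    #     "subnet": "10.88.0.0/16"
    # and after (gateway likewise becomes 10.244.0.1):
    #     "subnet": "10.244.0.0/16"
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist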
	I0923 17:17:13.116978    4371 start.go:495] detecting cgroup driver to use...
	I0923 17:17:13.117048    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:17:13.122353    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 17:17:13.125554    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 17:17:13.128398    4371 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 17:17:13.128427    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 17:17:13.131451    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:17:13.134991    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 17:17:13.138164    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:17:13.141212    4371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 17:17:13.143975    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 17:17:13.146880    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 17:17:13.151143    4371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
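
Taken together, the sed passes above converge /etc/containerd/config.toml on one consistent CRI setup: pause:3.7 as the sandbox image, restrict_oom_score_adj = false, SystemdCgroup = false (matching the cgroupfs driver chosen above), the io.containerd.runc.v2 runtime, conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true. One hedged way to eyeball the result in a single pass:

    # Sketch: confirm the rewritten keys after the edits.
    sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|runc\.v2|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # Expected values (abridged):
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   "io.containerd.runc.v2"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true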
	I0923 17:17:13.154286    4371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 17:17:13.157070    4371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 17:17:13.159745    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:13.258978    4371 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 17:17:13.270959    4371 start.go:495] detecting cgroup driver to use...
	I0923 17:17:13.271035    4371 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 17:17:13.280005    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:17:13.285528    4371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 17:17:13.294130    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:17:13.300405    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 17:17:13.304852    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:17:13.309770    4371 ssh_runner.go:195] Run: which cri-dockerd
	I0923 17:17:13.311133    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 17:17:13.314314    4371 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 17:17:13.319518    4371 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 17:17:13.415396    4371 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 17:17:13.515244    4371 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 17:17:13.515313    4371 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 17:17:13.520897    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:13.611906    4371 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:17:26.437974    4371 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.757528417s)
	I0923 17:17:26.438047    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 17:17:26.442468    4371 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 17:17:26.449604    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:17:26.454170    4371 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 17:17:26.541609    4371 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 17:17:26.611914    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:26.694750    4371 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 17:17:26.702235    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:17:26.707292    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:26.774251    4371 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 17:17:26.812584    4371 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 17:17:26.812686    4371 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
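
"Will wait 60s for socket path" is implemented as repeated stat probes against the socket; a shell equivalent of the same wait loop would be:

    # Poll for the CRI socket for up to 60 seconds.
    for _ in $(seq 1 60); do
      [ -S /var/run/cri-dockerd.sock ] && break
      sleep 1
    done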
	I0923 17:17:26.814644    4371 start.go:563] Will wait 60s for crictl version
	I0923 17:17:26.814691    4371 ssh_runner.go:195] Run: which crictl
	I0923 17:17:26.817338    4371 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 17:17:26.829391    4371 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 17:17:26.829479    4371 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 17:17:26.842168    4371 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 17:17:26.858566    4371 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 17:17:26.858655    4371 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
I0923 17:17:26.860001    4371 kubeadm.go:883] updating cluster {Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 17:17:26.860052    4371 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:17:26.860101    4371 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:17:26.870744    4371 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:17:26.870760    4371 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:17:26.870816    4371 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:17:26.874198    4371 ssh_runner.go:195] Run: which lz4
	I0923 17:17:26.875401    4371 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 17:17:26.876501    4371 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 17:17:26.876511    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 17:17:27.816985    4371 docker.go:649] duration metric: took 941.626458ms to copy over tarball
	I0923 17:17:27.817052    4371 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 17:17:29.081307    4371 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.264237375s)
	I0923 17:17:29.081328    4371 ssh_runner.go:146] rm: /preloaded.tar.lz4
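
The preload path sidesteps per-image pulls: a ~360 MB lz4 tarball of the docker image store is copied into the guest, unpacked over /var, and deleted. The extract step, as run above, preserves extended attributes so that file capabilities on the bundled binaries survive:

    # Unpack a docker/overlay2 preload into /var, keeping security.capability xattrs.
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4   # reclaim the space once extracted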
	I0923 17:17:29.097297    4371 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:17:29.100205    4371 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 17:17:29.104938    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:29.186312    4371 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:17:30.456439    4371 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.270119333s)
	I0923 17:17:30.456551    4371 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:17:30.467402    4371 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:17:30.467412    4371 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:17:30.467417    4371 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 17:17:30.471546    4371 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:17:30.473837    4371 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:17:30.475677    4371 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:17:30.475942    4371 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:17:30.477248    4371 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:17:30.477492    4371 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:17:30.479168    4371 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:17:30.479299    4371 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:17:30.480090    4371 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:17:30.481079    4371 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:17:30.481917    4371 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:17:30.482431    4371 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:17:30.483214    4371 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 17:17:30.483677    4371 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:17:30.485784    4371 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 17:17:30.485784    4371 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:17:30.892508    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:17:30.905296    4371 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 17:17:30.905325    4371 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:17:30.905401    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:17:30.917039    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 17:17:30.918905    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:17:30.923024    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:17:30.924893    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:17:30.936051    4371 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 17:17:30.936073    4371 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:17:30.936138    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:17:30.937161    4371 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 17:17:30.937172    4371 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:17:30.937214    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:17:30.951501    4371 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 17:17:30.951523    4371 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:17:30.951595    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:17:30.956804    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 17:17:30.958455    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 17:17:30.964722    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 17:17:30.966130    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 17:17:30.974547    4371 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 17:17:30.974570    4371 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 17:17:30.974638    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 17:17:30.984873    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 17:17:30.985022    4371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 17:17:30.986752    4371 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 17:17:30.986765    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0923 17:17:30.986856    4371 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 17:17:30.986969    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:17:30.994977    4371 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 17:17:30.994990    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 17:17:31.001741    4371 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 17:17:31.001765    4371 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:17:31.001826    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:17:31.007839    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 17:17:31.033937    4371 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 17:17:31.033956    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 17:17:31.033973    4371 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 17:17:31.033992    4371 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:17:31.034040    4371 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 17:17:31.034067    4371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:17:31.044030    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 17:17:31.044112    4371 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 17:17:31.044126    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 17:17:31.044156    4371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:17:31.053941    4371 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 17:17:31.053976    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 17:17:31.122612    4371 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:17:31.122627    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 17:17:31.229548    4371 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0923 17:17:31.333206    4371 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 17:17:31.333326    4371 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:17:31.344835    4371 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:17:31.344850    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0923 17:17:31.353425    4371 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 17:17:31.353456    4371 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:17:31.353531    4371 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:17:31.507587    4371 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 17:17:32.257738    4371 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 17:17:32.258316    4371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:17:32.263260    4371 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 17:17:32.263317    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 17:17:32.324691    4371 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:17:32.324706    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 17:17:32.552974    4371 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 17:17:32.553015    4371 cache_images.go:92] duration metric: took 2.08560425s to LoadCachedImages
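
Each image that the preload did not satisfy goes through the same cache pipeline seen above: docker image inspect detects absence (or a wrong-arch copy), docker rmi clears the stale tag, the cached tarball is scp'd into /var/lib/minikube/images, and the tar is loaded. A hedged sketch of the final load step:

    # Load a cached image tarball into the Docker daemon (placeholder path).
    IMG=/var/lib/minikube/images/pause_3.7
    sudo cat "$IMG" | docker load

The sudo cat ... | docker load form is presumably used because the tarball is root-owned while the docker client itself only needs socket access.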
	W0923 17:17:32.553053    4371 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0923 17:17:32.553060    4371 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 17:17:32.553124    4371 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-903000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 17:17:32.553206    4371 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 17:17:32.566964    4371 cni.go:84] Creating CNI manager for ""
	I0923 17:17:32.566988    4371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:17:32.566994    4371 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0923 17:17:32.567002    4371 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-903000 NodeName:running-upgrade-903000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 17:17:32.567064    4371 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-903000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
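
The generated file stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---"; kubeadm consumes them all from a single --config flag. One hedged way to exercise such a file without mutating the node is a dry run:

    # Sketch: validate the stacked config via a dry run (v1.24-era kubeadm).
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run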
	
	I0923 17:17:32.567133    4371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 17:17:32.570091    4371 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 17:17:32.570132    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 17:17:32.573073    4371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 17:17:32.577989    4371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 17:17:32.582895    4371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 17:17:32.588271    4371 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 17:17:32.589683    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:17:32.657250    4371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:17:32.662329    4371 certs.go:68] Setting up /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000 for IP: 10.0.2.15
	I0923 17:17:32.662338    4371 certs.go:194] generating shared ca certs ...
	I0923 17:17:32.662346    4371 certs.go:226] acquiring lock for ca certs: {Name:mk0bd8a887d4e289277fd6cf7c9ed1b474966431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:17:32.662513    4371 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key
	I0923 17:17:32.662560    4371 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key
	I0923 17:17:32.662569    4371 certs.go:256] generating profile certs ...
	I0923 17:17:32.662632    4371 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.key
	I0923 17:17:32.662650    4371 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5
	I0923 17:17:32.662666    4371 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 17:17:32.843196    4371 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 ...
	I0923 17:17:32.843209    4371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5: {Name:mkbb0270da57cf30c35c45a7ff2ce6ac1628801b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:17:32.843473    4371 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5 ...
	I0923 17:17:32.843479    4371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5: {Name:mkdd9efc3ec7bce80c477a38543e3e1e19e5c69f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:17:32.843614    4371 certs.go:381] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt
	I0923 17:17:32.843756    4371 certs.go:385] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key
	I0923 17:17:32.844092    4371 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/proxy-client.key
	I0923 17:17:32.844243    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem (1338 bytes)
	W0923 17:17:32.844278    4371 certs.go:480] ignoring /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0923 17:17:32.844284    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 17:17:32.844303    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem (1082 bytes)
	I0923 17:17:32.844324    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem (1123 bytes)
	I0923 17:17:32.844342    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem (1679 bytes)
	I0923 17:17:32.844380    4371 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:17:32.844709    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 17:17:32.864820    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 17:17:32.871781    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 17:17:32.879649    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 17:17:32.899190    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 17:17:32.914245    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 17:17:32.927251    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 17:17:32.952419    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 17:17:32.964689    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0923 17:17:32.985572    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0923 17:17:33.002242    4371 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 17:17:33.013577    4371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 17:17:33.022162    4371 ssh_runner.go:195] Run: openssl version
	I0923 17:17:33.025525    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0923 17:17:33.029041    4371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0923 17:17:33.031288    4371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:53 /usr/share/ca-certificates/15962.pem
	I0923 17:17:33.031315    4371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0923 17:17:33.034830    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 17:17:33.041709    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 17:17:33.048772    4371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:17:33.053454    4371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:17:33.053487    4371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:17:33.061310    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 17:17:33.068639    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0923 17:17:33.077754    4371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0923 17:17:33.082487    4371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:53 /usr/share/ca-certificates/1596.pem
	I0923 17:17:33.082529    4371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0923 17:17:33.092385    4371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
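
The hash-named symlinks above are how OpenSSL's trust-store lookup works: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs makes the certificate discoverable by any client using the system store (the .0 suffix leaves room for .1, .2, ... on hash collisions). Sketch with a hypothetical certificate path:

    # Create the subject-hash lookup link OpenSSL expects.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"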
	I0923 17:17:33.104330    4371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 17:17:33.109634    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 17:17:33.119376    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 17:17:33.132054    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 17:17:33.137678    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 17:17:33.139827    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 17:17:33.141774    4371 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
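
The -checkend 86400 flag asks whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, non-zero is the cue to regenerate. Sketch:

    # Exit 0 if the cert is good for at least another 24h.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "cert ok for >=24h"
    else
      echo "cert expires within 24h; regenerate"
    fi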
I0923 17:17:33.143784    4371 kubeadm.go:392] StartCluster: {Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:17:33.143875    4371 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:17:33.163221    4371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 17:17:33.171444    4371 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 17:17:33.171460    4371 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 17:17:33.171508    4371 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 17:17:33.177832    4371 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:17:33.178094    4371 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-903000" does not appear in /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:17:33.178142    4371 kubeconfig.go:62] /Users/jenkins/minikube-integration/19696-1109/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-903000" cluster setting kubeconfig missing "running-upgrade-903000" context setting]
	I0923 17:17:33.178281    4371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0923 17:17:33.178668    4371 kapi.go:59] client config for running-upgrade-903000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106966030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:17:33.178992    4371 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 17:17:33.184503    4371 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-903000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0923 17:17:33.184511    4371 kubeadm.go:1160] stopping kube-system containers ...
	I0923 17:17:33.184583    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:17:33.214886    4371 docker.go:483] Stopping containers: [a84de2b73e49 8b9a027a5b5d ea8914f0f7c5 3b316c561070 a35db5d3c1ef 4757d75a4b64 86f1da3a8f61 0fe4da17c7a3 2a4897361969 a333c0b735e9 0f0ee37c7f47 438482b767a6 56fe54df9cf9 3b795adb2a40 87eedd36621a 7a6ccedacf07 69174ba164ed 0477ab4b40c4 219fa3ad8fb8 2d0ee88c8b39]
	I0923 17:17:33.214994    4371 ssh_runner.go:195] Run: docker stop a84de2b73e49 8b9a027a5b5d ea8914f0f7c5 3b316c561070 a35db5d3c1ef 4757d75a4b64 86f1da3a8f61 0fe4da17c7a3 2a4897361969 a333c0b735e9 0f0ee37c7f47 438482b767a6 56fe54df9cf9 3b795adb2a40 87eedd36621a 7a6ccedacf07 69174ba164ed 0477ab4b40c4 219fa3ad8fb8 2d0ee88c8b39
	I0923 17:17:33.799755    4371 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 17:17:33.880622    4371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:17:33.883846    4371 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 24 00:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 24 00:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 24 00:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 24 00:17 /etc/kubernetes/scheduler.conf
	
	I0923 17:17:33.883875    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0923 17:17:33.886564    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:17:33.886601    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:17:33.889302    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0923 17:17:33.891899    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:17:33.891930    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:17:33.894634    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0923 17:17:33.897460    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:17:33.897487    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:17:33.900211    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0923 17:17:33.903139    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:17:33.903174    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 17:17:33.907848    4371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:17:33.911373    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:17:33.933681    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:17:34.566506    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:17:34.775878    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:17:34.795891    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
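
Rather than a full kubeadm init, the restart path replays individual phases in dependency order: certificates, kubeconfig files, kubelet bootstrap, the control-plane static pod manifests, then local etcd. A hedged sketch of the same sequence:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.24.1
    # $phase is deliberately unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done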
	I0923 17:17:34.817003    4371 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:17:34.817085    4371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:17:35.319194    4371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:17:35.819173    4371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:17:35.823861    4371 api_server.go:72] duration metric: took 1.006865625s to wait for apiserver process to appear ...
	I0923 17:17:35.823870    4371 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:17:35.823880    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:17:40.825909    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:17:40.825951    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:17:45.826352    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:17:45.826422    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:17:50.827194    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:17:50.827284    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:17:55.828433    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:17:55.828533    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:00.830094    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:00.830198    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:05.832205    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:05.832307    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:10.834859    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:10.834949    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:15.837581    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:15.837674    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:20.840372    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:20.840478    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:25.843169    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:25.843277    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:30.845579    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:30.845684    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:35.846739    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
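	
	After the init phases, minikube first waits for a kube-apiserver process (the pgrep probes at 17:17:34-35, satisfied in about a second) and then switches to HTTP health probing. From 17:17:35 onward every GET against https://10.0.2.15:8443/healthz times out after roughly five seconds ("context deadline exceeded") and is retried, which is why the failing probes above are spaced ~5s apart. A minimal sketch of such a probe loop, assuming a 5s per-request client timeout and skipped TLS verification (the apiserver serves a cluster-internal certificate); the timeout and overall budget are inferences from the log, not minikube's exact implementation.
	
	// Minimal healthz polling sketch matching the cadence seen above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between probes in the log
			Transport: &http.Transport{
				// Assumption for this sketch: skip verification of the
				// cluster-internal apiserver certificate.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // illustrative overall budget
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("gave up waiting for healthz")
	}
	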
	I0923 17:18:35.847085    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:18:35.878461    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:18:35.878619    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:18:35.895172    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:18:35.895262    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:18:35.909815    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:18:35.909905    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:18:35.920578    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:18:35.920667    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:18:35.931189    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:18:35.931264    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:18:35.941692    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:18:35.941774    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:18:35.951457    4371 logs.go:276] 0 containers: []
	W0923 17:18:35.951469    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:18:35.951536    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:18:35.962279    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:18:35.962296    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:18:35.962301    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:18:36.032790    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:18:36.032804    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:18:36.044529    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:18:36.044548    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:18:36.058359    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:18:36.058369    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:18:36.098879    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:18:36.098888    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:18:36.103419    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:18:36.103428    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:18:36.114086    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:18:36.114096    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:18:36.127312    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:18:36.127325    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:18:36.141604    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:18:36.141615    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:18:36.152229    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:18:36.152241    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:18:36.162986    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:18:36.162996    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:18:36.176606    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:18:36.176615    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:18:36.193464    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:18:36.193475    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:18:36.204994    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:18:36.205005    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:18:36.217023    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:18:36.217033    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:18:36.243550    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:18:36.243558    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:18:36.255809    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:18:36.255819    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
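	
	Each failed healthz wait triggers the diagnostic cycle just shown, and the same cycle repeats below for every subsequent timeout: minikube lists containers per control-plane component with docker ps -a --filter=name=k8s_<component>, tails the last 400 log lines of each container it finds, and also collects the kubelet and docker journals, dmesg, container status, and kubectl describe nodes. A hedged Go sketch of the container-enumeration part follows; the component list is read off the log, and the helper is illustrative rather than minikube's actual API.
	
	// Sketch of the repeated log-gathering cycle: list matching container IDs
	// per component, then tail each container's last 400 log lines.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}
	
	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids := containerIDs(c)
			fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
			for _, id := range ids {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
			}
		}
	}
	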
	I0923 17:18:38.769198    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:43.771625    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:43.772278    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:18:43.810149    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:18:43.810322    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:18:43.831148    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:18:43.831249    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:18:43.846738    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:18:43.846831    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:18:43.862277    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:18:43.862352    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:18:43.872754    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:18:43.872828    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:18:43.886712    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:18:43.886786    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:18:43.897349    4371 logs.go:276] 0 containers: []
	W0923 17:18:43.897359    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:18:43.897426    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:18:43.907861    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:18:43.907880    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:18:43.907885    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:18:43.925006    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:18:43.925015    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:18:43.936574    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:18:43.936590    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:18:43.947598    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:18:43.947611    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:18:43.965355    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:18:43.965367    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:18:43.976675    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:18:43.976688    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:18:44.001682    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:18:44.001690    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:18:44.014126    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:18:44.014140    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:18:44.025530    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:18:44.025541    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:18:44.062526    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:18:44.062539    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:18:44.076107    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:18:44.076118    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:18:44.088742    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:18:44.088751    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:18:44.101050    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:18:44.101060    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:18:44.112834    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:18:44.112842    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:18:44.152885    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:18:44.152893    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:18:44.156892    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:18:44.156899    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:18:44.168596    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:18:44.168607    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:18:46.680763    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:51.683025    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:51.683541    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:18:51.717002    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:18:51.717165    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:18:51.736860    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:18:51.737023    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:18:51.751508    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:18:51.751610    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:18:51.763326    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:18:51.763403    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:18:51.774230    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:18:51.774320    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:18:51.785531    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:18:51.785616    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:18:51.795809    4371 logs.go:276] 0 containers: []
	W0923 17:18:51.795825    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:18:51.795903    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:18:51.806731    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:18:51.806753    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:18:51.806758    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:18:51.849557    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:18:51.849568    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:18:51.854027    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:18:51.854033    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:18:51.874770    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:18:51.874783    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:18:51.892302    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:18:51.892318    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:18:51.927865    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:18:51.927877    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:18:51.941908    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:18:51.941919    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:18:51.953671    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:18:51.953684    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:18:51.965131    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:18:51.965142    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:18:51.977799    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:18:51.977809    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:18:51.992051    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:18:51.992060    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:18:52.007675    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:18:52.007684    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:18:52.031542    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:18:52.031553    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:18:52.042906    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:18:52.042919    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:18:52.057699    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:18:52.057712    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:18:52.076434    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:18:52.076445    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:18:52.088138    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:18:52.088151    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:18:54.615114    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:18:59.617972    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:18:59.618528    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:18:59.660469    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:18:59.660626    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:18:59.684386    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:18:59.684508    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:18:59.701711    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:18:59.701793    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:18:59.714278    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:18:59.714371    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:18:59.724959    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:18:59.725048    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:18:59.735792    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:18:59.735875    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:18:59.746208    4371 logs.go:276] 0 containers: []
	W0923 17:18:59.746221    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:18:59.746289    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:18:59.757245    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:18:59.757266    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:18:59.757272    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:18:59.768763    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:18:59.768776    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:18:59.780258    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:18:59.780269    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:18:59.792899    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:18:59.792911    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:18:59.804698    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:18:59.804712    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:18:59.844273    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:18:59.844281    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:18:59.855705    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:18:59.855715    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:18:59.867509    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:18:59.867520    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:18:59.878650    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:18:59.878660    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:18:59.903805    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:18:59.903812    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:18:59.907793    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:18:59.907802    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:18:59.919418    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:18:59.919429    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:18:59.932991    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:18:59.933000    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:18:59.950673    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:18:59.950684    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:18:59.961926    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:18:59.961946    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:18:59.988473    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:18:59.988487    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:00.024348    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:00.024360    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:02.548613    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:07.550969    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:07.551159    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:07.570015    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:07.570106    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:07.581641    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:07.581734    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:07.592928    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:07.593009    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:07.604383    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:07.604472    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:07.615184    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:07.615273    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:07.626833    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:07.626910    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:07.637699    4371 logs.go:276] 0 containers: []
	W0923 17:19:07.637710    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:07.637783    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:07.648991    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:07.649007    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:07.649012    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:07.687141    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:07.687154    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:07.699880    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:07.699894    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:07.711486    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:07.711498    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:07.729084    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:07.729096    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:07.733800    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:07.733809    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:07.748411    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:07.748428    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:07.766294    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:07.766311    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:07.781139    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:07.781156    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:07.793066    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:07.793082    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:07.806959    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:07.806971    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:07.818937    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:07.818950    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:07.830836    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:07.830846    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:07.845908    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:07.845922    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:07.872138    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:07.872148    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:07.911607    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:07.911615    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:07.927874    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:07.927886    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:10.441524    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:15.444015    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:15.444393    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:15.477996    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:15.478135    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:15.494528    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:15.494636    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:15.507091    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:15.507188    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:15.518861    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:15.518947    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:15.529357    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:15.529428    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:15.539535    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:15.539611    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:15.549644    4371 logs.go:276] 0 containers: []
	W0923 17:19:15.549655    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:15.549718    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:15.560211    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:15.560226    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:15.560231    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:15.595553    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:15.595579    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:15.607279    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:15.607294    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:15.618790    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:15.618802    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:15.632572    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:15.632582    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:15.644649    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:15.644658    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:15.656562    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:15.656573    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:15.684295    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:15.684305    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:15.695416    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:15.695427    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:15.706917    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:15.706930    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:15.731346    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:15.731355    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:15.742728    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:15.742738    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:15.782912    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:15.782919    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:15.787415    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:15.787421    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:15.801189    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:15.801199    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:15.812880    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:15.812895    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:15.823745    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:15.823754    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:18.336896    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:23.339634    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:23.340087    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:23.374683    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:23.374856    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:23.394589    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:23.394699    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:23.411962    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:23.412061    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:23.423440    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:23.423516    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:23.434015    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:23.434097    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:23.444793    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:23.444869    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:23.455465    4371 logs.go:276] 0 containers: []
	W0923 17:19:23.455477    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:23.455542    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:23.468845    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:23.468864    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:23.468891    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:23.510518    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:23.510530    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:23.524491    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:23.524500    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:23.535639    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:23.535651    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:23.547195    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:23.547207    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:23.551894    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:23.551904    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:23.567179    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:23.567190    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:23.578355    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:23.578365    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:23.590136    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:23.590146    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:23.607171    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:23.607185    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:23.618994    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:23.619004    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:23.631780    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:23.631792    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:23.642701    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:23.642711    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:23.653491    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:23.653502    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:23.664897    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:23.664912    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:23.700329    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:23.700339    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:23.711585    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:23.711601    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:26.239718    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:31.242074    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:31.242664    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:31.283614    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:31.283802    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:31.305325    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:31.305472    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:31.326266    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:31.326359    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:31.339717    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:31.339792    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:31.351097    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:31.351169    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:31.362036    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:31.362106    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:31.372608    4371 logs.go:276] 0 containers: []
	W0923 17:19:31.372621    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:31.372696    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:31.385844    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:31.385859    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:31.385865    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:31.403700    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:31.403714    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:31.416193    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:31.416207    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:31.430408    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:31.430418    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:31.443441    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:31.443452    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:31.455269    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:31.455279    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:31.466952    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:31.466961    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:31.502449    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:31.502467    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:31.516825    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:31.516834    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:31.527919    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:31.527931    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:31.547818    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:31.547827    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:31.572248    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:31.572256    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:31.611666    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:31.611673    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:31.616014    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:31.616022    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:31.627896    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:31.627907    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:31.639333    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:31.639345    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:31.651434    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:31.651447    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:34.165170    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:39.167880    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:39.168134    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:39.189262    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:39.189358    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:39.211043    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:39.211130    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:39.221776    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:39.221862    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:39.232564    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:39.232646    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:39.243085    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:39.243170    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:39.255029    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:39.255117    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:39.268879    4371 logs.go:276] 0 containers: []
	W0923 17:19:39.268889    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:39.268955    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:39.279583    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:39.279600    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:39.279606    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:39.290714    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:39.290726    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:39.307601    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:39.307612    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:39.318321    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:39.318334    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:39.361544    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:39.361559    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:39.375643    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:39.375652    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:39.390376    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:39.390388    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:39.405385    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:39.405399    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:39.409640    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:39.409645    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:39.421087    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:39.421096    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:39.433770    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:39.433780    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:39.445500    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:39.445515    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:39.457208    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:39.457218    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:39.468504    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:39.468513    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:39.493962    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:39.493969    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:39.505905    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:39.505919    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:39.548227    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:39.548234    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:42.063868    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:47.066200    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:19:47.066352    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:47.083285    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:47.083380    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:47.095187    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:47.095279    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:47.107871    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:47.107955    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:47.119915    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:47.120010    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:47.132495    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:47.132576    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:47.144602    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:47.144697    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:47.158046    4371 logs.go:276] 0 containers: []
	W0923 17:19:47.158065    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:47.158145    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:47.172208    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:19:47.172228    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:47.172234    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:47.177317    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:47.177332    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:47.201346    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:47.201359    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:47.247821    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:47.247835    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:47.262589    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:47.262600    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:47.274382    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:47.274393    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:19:47.286367    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:47.286377    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:47.304737    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:47.304748    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:47.317670    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:47.317682    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:47.331708    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:47.331724    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:47.358696    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:47.358706    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:47.370487    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:47.370504    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:47.386322    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:47.386333    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:47.399713    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:47.399726    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:47.412954    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:47.412970    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:47.434663    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:47.434676    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:47.448387    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:47.448403    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:49.990642    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:19:54.993000    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
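Note the timing in the pair of lines above: each probe at api_server.go:253 is followed almost exactly five seconds later by a "stopped:" line, which is the HTTP client timeout expiring rather than the apiserver answering. Below is a minimal Go sketch of this probe loop, assuming a plain net/http client; the endpoint URL and 5-second timeout are taken from the log, while the function name, the skip-verify transport, and the overall deadline are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200/"ok" or the deadline passes.
// Hypothetical helper; minikube's real loop lives in api_server.go.
func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap before each "stopped:" line
		Transport: &http.Transport{
			// Assumption: skip cert verification, since the apiserver inside
			// the VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
			err = fmt.Errorf("status %d: %s", resp.StatusCode, body)
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		// In the real run, a full diagnostic pass (container discovery and
		// log gathering, shown below) happens here before the next probe.
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	_ = waitForHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(2*time.Minute))
}
```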
	I0923 17:19:54.993285    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:19:55.017126    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:19:55.017268    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:19:55.032832    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:19:55.032932    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:19:55.046215    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:19:55.046300    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:19:55.057953    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:19:55.058041    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:19:55.070223    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:19:55.070307    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:19:55.080955    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:19:55.081037    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:19:55.090632    4371 logs.go:276] 0 containers: []
	W0923 17:19:55.090644    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:19:55.090715    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:19:55.100318    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
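The eight ssh_runner lines above discover each component's containers with one `docker ps -a` per component, filtered on the `k8s_<name>` container-name prefix and formatted to emit bare IDs, which is why kindnet (not deployed here) yields "0 containers" and a warning. A sketch under those assumptions follows; the docker flags are exactly those shown in the log, but listContainers is an illustrative helper and exec.Command runs locally where the real ssh_runner executes over SSH inside the VM.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name starts with k8s_<component>, mirroring the filter in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also tolerates a trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```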
	I0923 17:19:55.100336    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:19:55.100341    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:19:55.112177    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:19:55.112192    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:19:55.137435    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:19:55.137442    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:19:55.151720    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:19:55.151730    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:19:55.169057    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:19:55.169067    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:19:55.182462    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:19:55.182478    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:19:55.194539    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:19:55.194549    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:19:55.205364    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:19:55.205376    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:19:55.220946    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:19:55.220956    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:19:55.240916    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:19:55.240929    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:19:55.244978    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:19:55.244987    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:19:55.279322    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:19:55.279337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:19:55.293362    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:19:55.293372    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:19:55.307678    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:19:55.307694    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:19:55.349136    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:19:55.349144    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:19:55.361665    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:19:55.361675    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:19:55.377049    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:19:55.377058    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
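After discovery, the "Gathering logs for ..." pass replays a fixed set of shell commands through /bin/bash -c: journalctl for the kubelet and the docker/cri-docker units, dmesg filtered to warnings and above, `docker logs --tail 400` per container ID, crictl (falling back to `docker ps -a`) for container status, and kubectl describe nodes against the in-VM kubeconfig. The commands in the sketch below are verbatim from the log lines above; the gather helper and local exec.Command are stand-ins for the SSH-backed ssh_runner.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command the way the log shows: through bash -c.
// Hypothetical helper; in minikube the output lands in the test log.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	if _, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
		fmt.Printf("  %s failed: %v\n", name, err)
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("kube-apiserver [fd00d1544c98]", "docker logs --tail 400 fd00d1544c98")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
}
```

The cycles that follow repeat this same probe-then-gather pattern, differing only in timestamps and in the order the per-container logs are collected.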
	I0923 17:19:57.890629    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:02.891093    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:02.891194    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:02.902802    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:02.902894    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:02.914179    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:02.914293    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:02.926477    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:02.926667    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:02.940161    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:02.940241    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:02.963478    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:02.963558    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:02.975076    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:02.975155    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:02.986416    4371 logs.go:276] 0 containers: []
	W0923 17:20:02.986426    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:02.986495    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:02.998240    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:02.998262    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:02.998268    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:03.010841    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:03.010856    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:03.027114    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:03.027126    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:03.042054    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:03.042066    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:03.058606    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:03.058618    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:03.096562    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:03.096573    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:03.109540    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:03.109551    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:03.121684    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:03.121696    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:03.133094    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:03.133105    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:03.144746    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:03.144762    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:03.157886    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:03.157896    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:03.184722    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:03.184730    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:03.188940    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:03.188945    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:03.203225    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:03.203240    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:03.221331    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:03.221346    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:03.232677    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:03.232690    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:03.244737    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:03.244752    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:05.789868    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:10.792118    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:10.792310    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:10.805569    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:10.805662    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:10.817813    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:10.817900    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:10.829037    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:10.829118    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:10.839963    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:10.840044    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:10.850315    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:10.850404    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:10.869639    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:10.869723    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:10.879843    4371 logs.go:276] 0 containers: []
	W0923 17:20:10.879856    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:10.879926    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:10.890625    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:10.890646    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:10.890651    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:10.902283    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:10.902297    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:10.921479    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:10.921492    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:10.934482    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:10.934497    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:10.978163    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:10.978187    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:10.983206    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:10.983215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:10.997919    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:10.997932    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:11.009117    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:11.009133    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:11.021617    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:11.021629    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:11.056931    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:11.056944    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:11.070653    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:11.070668    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:11.088563    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:11.088576    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:11.100513    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:11.100528    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:11.111906    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:11.111919    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:11.127068    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:11.127084    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:11.138544    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:11.138555    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:11.150361    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:11.150373    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:13.676122    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:18.678374    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:18.678959    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:18.728476    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:18.728618    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:18.744706    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:18.744813    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:18.762269    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:18.762356    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:18.772887    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:18.772964    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:18.783254    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:18.783347    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:18.793578    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:18.793673    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:18.803768    4371 logs.go:276] 0 containers: []
	W0923 17:20:18.803780    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:18.803859    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:18.814590    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:18.814614    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:18.814620    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:18.857470    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:18.857480    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:18.862308    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:18.862315    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:18.898079    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:18.898090    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:18.911455    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:18.911466    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:18.924932    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:18.924943    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:18.948390    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:18.948397    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:18.963059    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:18.963070    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:18.981262    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:18.981271    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:19.002875    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:19.002886    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:19.015609    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:19.015619    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:19.027010    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:19.027020    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:19.038624    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:19.038634    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:19.051253    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:19.051263    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:19.063022    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:19.063034    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:19.074311    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:19.074320    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:19.085763    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:19.085773    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:21.598226    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:26.600550    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:26.601220    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:26.643199    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:26.643367    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:26.665105    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:26.665234    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:26.683150    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:26.683250    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:26.695307    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:26.695401    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:26.706177    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:26.706269    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:26.716935    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:26.717019    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:26.727022    4371 logs.go:276] 0 containers: []
	W0923 17:20:26.727034    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:26.727109    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:26.737512    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:26.737530    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:26.737536    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:26.749170    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:26.749179    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:26.760734    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:26.760745    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:26.785609    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:26.785618    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:26.805395    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:26.805406    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:26.826010    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:26.826021    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:26.845105    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:26.845119    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:26.880393    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:26.880437    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:26.891736    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:26.891747    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:26.907292    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:26.907304    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:26.919293    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:26.919305    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:26.932132    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:26.932147    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:26.944957    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:26.944971    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:26.950534    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:26.950545    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:26.962313    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:26.962323    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:26.973860    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:26.973870    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:26.985101    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:26.985110    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:29.528120    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:34.530402    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:34.530606    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:34.542813    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:34.542914    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:34.554184    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:34.554281    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:34.569304    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:34.569378    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:34.579398    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:34.579483    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:34.595471    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:34.595545    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:34.606248    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:34.606327    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:34.616220    4371 logs.go:276] 0 containers: []
	W0923 17:20:34.616231    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:34.616300    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:34.626559    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:34.626578    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:34.626584    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:34.631309    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:34.631316    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:34.667523    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:34.667533    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:34.679761    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:34.679771    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:34.691597    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:34.691608    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:34.703269    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:34.703282    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:34.730104    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:34.730114    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:34.753404    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:34.753421    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:34.767783    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:34.767795    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:34.779370    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:34.779387    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:34.797436    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:34.797452    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:34.812035    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:34.812045    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:34.828168    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:34.828177    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:34.852281    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:34.852289    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:34.893667    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:34.893676    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:34.906349    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:34.906360    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:34.917912    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:34.917924    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:37.439866    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:42.442112    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:42.442407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:42.463241    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:42.463389    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:42.478737    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:42.478839    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:42.490733    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:42.490822    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:42.501553    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:42.501630    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:42.516430    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:42.516510    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:42.526374    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:42.526456    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:42.537229    4371 logs.go:276] 0 containers: []
	W0923 17:20:42.537243    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:42.537301    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:42.552115    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:42.552130    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:42.552135    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:42.562954    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:42.562963    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:42.587841    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:42.587848    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:42.629283    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:42.629291    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:42.665420    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:42.665434    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:42.683813    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:42.683827    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:42.696141    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:42.696151    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:42.710427    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:42.710439    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:42.721603    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:42.721617    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:42.738832    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:42.738846    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:42.753342    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:42.753353    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:42.765629    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:42.765640    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:42.770535    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:42.770545    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:42.784895    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:42.784905    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:42.796092    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:42.796104    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:42.806948    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:42.806962    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:42.818375    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:42.818390    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:45.331756    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:50.332637    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:50.332754    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:50.344823    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:50.344916    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:50.356176    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:50.356266    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:50.368331    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:50.368420    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:50.383220    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:50.383309    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:50.397190    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:50.397280    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:50.409477    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:50.409576    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:50.421771    4371 logs.go:276] 0 containers: []
	W0923 17:20:50.421784    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:50.421858    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:50.433379    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:50.433396    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:50.433401    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:50.450751    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:50.450763    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:50.464057    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:50.464070    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:50.476466    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:50.476478    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:50.490248    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:50.490260    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:50.505831    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:50.505844    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:50.520047    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:50.520063    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:50.532788    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:50.532825    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:50.548615    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:50.548627    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:50.592420    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:50.592434    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:50.597773    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:50.597786    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:50.636725    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:50.636739    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:50.652201    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:50.652215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:50.671603    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:50.671615    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:50.697252    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:50.697279    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:50.710647    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:50.710659    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:50.728897    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:50.728914    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:53.255713    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:58.258335    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:58.258454    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:58.271337    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:58.271421    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:58.282397    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:58.282473    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:58.294010    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:58.294083    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:58.305333    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:58.305418    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:58.316182    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:58.316264    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:58.327339    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:58.327417    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:58.338383    4371 logs.go:276] 0 containers: []
	W0923 17:20:58.338398    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:58.338470    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:58.349142    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:58.349160    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:58.349165    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:58.361517    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:58.361533    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:58.372661    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:58.372673    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:58.383805    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:58.383818    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:58.423395    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:58.423406    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:58.427592    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:58.427599    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:58.439612    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:58.439622    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:58.450822    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:58.450833    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:58.485425    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:58.485437    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:58.496807    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:58.496822    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:58.508886    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:58.508896    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:58.526398    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:58.526414    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:58.549942    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:58.549950    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:58.561883    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:58.561896    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:58.573402    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:58.573412    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:58.585109    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:58.585119    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:58.602738    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:58.602748    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:01.118714    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:06.120238    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:06.120524    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:06.142218    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:06.142360    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:06.157247    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:06.157333    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:06.169732    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:06.169819    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:06.180524    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:06.180607    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:06.191102    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:06.191189    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:06.201887    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:06.201974    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:06.212169    4371 logs.go:276] 0 containers: []
	W0923 17:21:06.212182    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:06.212258    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:06.222981    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:06.223001    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:06.223007    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:06.236329    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:06.236339    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:06.248182    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:06.248192    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:06.269170    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:06.269182    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:06.281299    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:06.281311    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:06.321711    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:06.321719    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:06.357003    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:06.357016    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:06.373389    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:06.373402    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:06.378253    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:06.378261    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:06.392420    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:06.392434    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:06.404032    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:06.404045    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:06.415712    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:06.415722    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:06.427228    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:06.427238    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:06.438767    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:06.438777    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:06.463026    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:06.463034    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:06.477028    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:06.477037    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:06.488328    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:06.488337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:09.001739    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:14.003374    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:14.003537    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:14.014480    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:14.014571    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:14.025556    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:14.025645    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:14.036449    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:14.036532    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:14.046821    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:14.046906    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:14.057280    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:14.057369    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:14.067951    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:14.068035    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:14.078303    4371 logs.go:276] 0 containers: []
	W0923 17:21:14.078314    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:14.078386    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:14.089523    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:14.089540    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:14.089545    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:14.102062    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:14.102079    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:14.115942    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:14.115954    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:14.136201    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:14.136215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:14.147865    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:14.147878    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:14.159129    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:14.159142    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:14.173232    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:14.173250    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:14.186313    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:14.186324    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:14.204010    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:14.204020    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:14.245876    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:14.245887    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:14.257603    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:14.257615    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:14.282669    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:14.282679    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:14.294270    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:14.294282    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:14.305430    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:14.305442    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:14.317143    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:14.317159    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:14.322137    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:14.322145    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:14.361885    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:14.361899    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:16.884490    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:21.886624    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:21.886939    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:21.914399    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:21.914540    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:21.935671    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:21.935770    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:21.948626    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:21.948725    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:21.961930    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:21.962017    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:21.972592    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:21.972672    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:21.982756    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:21.982851    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:21.993770    4371 logs.go:276] 0 containers: []
	W0923 17:21:21.993782    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:21.993857    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:22.004480    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:22.004499    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:22.004506    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:22.008898    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:22.008904    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:22.026276    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:22.026286    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:22.037840    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:22.037850    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:22.050180    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:22.050191    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:22.061822    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:22.061835    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:22.073482    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:22.073493    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:22.097126    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:22.097133    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:22.138953    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:22.138966    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:22.150389    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:22.150404    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:22.191089    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:22.191098    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:22.207758    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:22.207772    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:22.219071    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:22.219086    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:22.231786    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:22.231800    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:22.246652    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:22.246663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:22.258795    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:22.258812    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:22.277210    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:22.277225    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:24.788683    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:29.791025    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:29.791256    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:29.810248    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:29.810365    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:29.824689    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:29.824779    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:29.835684    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:29.835770    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:29.850066    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:29.850155    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:29.864397    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:29.864475    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:29.877178    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:29.877251    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:29.890043    4371 logs.go:276] 0 containers: []
	W0923 17:21:29.890057    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:29.890132    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:29.905810    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:29.905827    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:29.905833    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:29.917531    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:29.917542    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:29.934541    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:29.934552    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:29.945730    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:29.945742    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:29.960920    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:29.960930    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:29.972292    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:29.972304    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:29.984331    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:29.984345    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:29.995565    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:29.995578    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:30.000045    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:30.000055    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:30.037466    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:30.037480    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:30.049259    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:30.049271    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:30.090972    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:30.090981    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:30.102857    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:30.102868    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:30.125610    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:30.125620    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:30.138085    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:30.138101    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:30.158826    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:30.158837    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:30.171299    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:30.171313    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:32.685066    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:37.685884    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:37.685944    4371 kubeadm.go:597] duration metric: took 4m4.516184375s to restartPrimaryControlPlane
	W0923 17:21:37.686004    4371 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 17:21:37.686029    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 17:21:38.644560    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 17:21:38.649539    4371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:21:38.652256    4371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:21:38.655061    4371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:21:38.655069    4371 kubeadm.go:157] found existing configuration files:
	
	I0923 17:21:38.655100    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0923 17:21:38.657805    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:21:38.657838    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:21:38.660340    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0923 17:21:38.663015    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:21:38.663047    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:21:38.666278    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0923 17:21:38.668834    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:21:38.668868    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:21:38.671492    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0923 17:21:38.674477    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:21:38.674510    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
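	Note: the block above is minikube's stale-config cleanup before re-running kubeadm init: grep each /etc/kubernetes/*.conf for the expected control-plane endpoint and remove any file that lacks it (here the files do not exist at all, so each grep exits with status 2 and the rm is a no-op). A hedged Go equivalent of that check, with the paths and URL taken from the log:

	// Sketch of the grep-then-remove logic above; a hypothetical stand-in
	// for minikube's kubeadm.go config check, not its real code.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:50281"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, conf := range confs {
			data, err := os.ReadFile(conf)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: remove so kubeadm regenerates it.
				os.Remove(conf)
				fmt.Printf("removed stale %s\n", conf)
				continue
			}
			fmt.Printf("%s matches %s\n", conf, endpoint)
		}
	}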
	I0923 17:21:38.677117    4371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 17:21:38.699632    4371 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 17:21:38.699660    4371 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 17:21:38.747623    4371 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 17:21:38.747698    4371 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 17:21:38.747748    4371 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 17:21:38.798257    4371 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 17:21:38.801577    4371 out.go:235]   - Generating certificates and keys ...
	I0923 17:21:38.801610    4371 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 17:21:38.801646    4371 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 17:21:38.801690    4371 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 17:21:38.801724    4371 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 17:21:38.801759    4371 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 17:21:38.801793    4371 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 17:21:38.801825    4371 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 17:21:38.801857    4371 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 17:21:38.801892    4371 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 17:21:38.801934    4371 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 17:21:38.801965    4371 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 17:21:38.801996    4371 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 17:21:38.926930    4371 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 17:21:39.015183    4371 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 17:21:39.133020    4371 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 17:21:39.233088    4371 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 17:21:39.267725    4371 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 17:21:39.268044    4371 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 17:21:39.268100    4371 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 17:21:39.356403    4371 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 17:21:39.360444    4371 out.go:235]   - Booting up control plane ...
	I0923 17:21:39.360490    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 17:21:39.360545    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 17:21:39.360580    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 17:21:39.360847    4371 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 17:21:39.361612    4371 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 17:21:43.864087    4371 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0923 17:21:43.864190    4371 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 17:21:43.868536    4371 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 17:21:44.376408    4371 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 17:21:44.376564    4371 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-903000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 17:21:44.882275    4371 kubeadm.go:310] [bootstrap-token] Using token: rwu6gf.h8ide94f0miso0i5
	I0923 17:21:44.888071    4371 out.go:235]   - Configuring RBAC rules ...
	I0923 17:21:44.888208    4371 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 17:21:44.893003    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 17:21:44.896162    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 17:21:44.897059    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 17:21:44.897996    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 17:21:44.898786    4371 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 17:21:44.902285    4371 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 17:21:45.060064    4371 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 17:21:45.295159    4371 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 17:21:45.295690    4371 kubeadm.go:310] 
	I0923 17:21:45.295720    4371 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 17:21:45.295764    4371 kubeadm.go:310] 
	I0923 17:21:45.295807    4371 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 17:21:45.295811    4371 kubeadm.go:310] 
	I0923 17:21:45.295823    4371 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 17:21:45.295872    4371 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 17:21:45.295953    4371 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 17:21:45.295956    4371 kubeadm.go:310] 
	I0923 17:21:45.295985    4371 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 17:21:45.295987    4371 kubeadm.go:310] 
	I0923 17:21:45.296021    4371 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 17:21:45.296025    4371 kubeadm.go:310] 
	I0923 17:21:45.296082    4371 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 17:21:45.296139    4371 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 17:21:45.296215    4371 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 17:21:45.296225    4371 kubeadm.go:310] 
	I0923 17:21:45.296284    4371 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 17:21:45.296325    4371 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 17:21:45.296329    4371 kubeadm.go:310] 
	I0923 17:21:45.296393    4371 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rwu6gf.h8ide94f0miso0i5 \
	I0923 17:21:45.296450    4371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 \
	I0923 17:21:45.296461    4371 kubeadm.go:310] 	--control-plane 
	I0923 17:21:45.296465    4371 kubeadm.go:310] 
	I0923 17:21:45.296505    4371 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 17:21:45.296508    4371 kubeadm.go:310] 
	I0923 17:21:45.296553    4371 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rwu6gf.h8ide94f0miso0i5 \
	I0923 17:21:45.296628    4371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 
	I0923 17:21:45.296690    4371 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 17:21:45.296697    4371 cni.go:84] Creating CNI manager for ""
	I0923 17:21:45.296704    4371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:21:45.299861    4371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 17:21:45.306894    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 17:21:45.309851    4371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
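	Note: the 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the "qemu2" driver + docker runtime combination. The exact contents are not in the log; the sketch below writes a typical bridge-plugin conflist (field values assumed, not the byte-for-byte file minikube produced):

	// Sketch: write a minimal bridge CNI conflist like the one installed above.
	package main

	import (
		"fmt"
		"os"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}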
	I0923 17:21:45.314662    4371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 17:21:45.314743    4371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 17:21:45.314774    4371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-903000 minikube.k8s.io/updated_at=2024_09_23T17_21_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=running-upgrade-903000 minikube.k8s.io/primary=true
	I0923 17:21:45.358663    4371 ops.go:34] apiserver oom_adj: -16
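	Note: the oom_adj line above is the result of the earlier "cat /proc/$(pgrep kube-apiserver)/oom_adj" run; -16 means the kernel's OOM killer strongly prefers other processes over the apiserver. A small sketch of the same check (the pgrep flags mirror the log; modern kernels also expose /proc/<pid>/oom_score_adj):

	// Sketch: read the apiserver's OOM-killer adjustment as the log does.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("kube-apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		// -16 (as logged above) shields the apiserver from OOM kills.
		fmt.Printf("apiserver oom_adj: %s", data)
	}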
	I0923 17:21:45.358715    4371 kubeadm.go:1113] duration metric: took 44.017875ms to wait for elevateKubeSystemPrivileges
	I0923 17:21:45.358727    4371 kubeadm.go:394] duration metric: took 4m12.216707125s to StartCluster
	I0923 17:21:45.358737    4371 settings.go:142] acquiring lock: {Name:mk533b8e20cbdc896b9e0666ee546603a1b156f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:21:45.358827    4371 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:21:45.359207    4371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:21:45.359392    4371 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:21:45.359417    4371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 17:21:45.359529    4371 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-903000"
	I0923 17:21:45.359539    4371 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-903000"
	W0923 17:21:45.359542    4371 addons.go:243] addon storage-provisioner should already be in state true
	I0923 17:21:45.359551    4371 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-903000"
	I0923 17:21:45.359554    4371 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 17:21:45.359559    4371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-903000"
	I0923 17:21:45.359552    4371 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:21:45.360568    4371 kapi.go:59] client config for running-upgrade-903000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106966030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:21:45.360688    4371 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-903000"
	W0923 17:21:45.360693    4371 addons.go:243] addon default-storageclass should already be in state true
	I0923 17:21:45.360699    4371 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 17:21:45.363761    4371 out.go:177] * Verifying Kubernetes components...
	I0923 17:21:45.364070    4371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 17:21:45.367969    4371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 17:21:45.367976    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:21:45.373026    4371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:21:45.373228    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:21:45.376939    4371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:21:45.376955    4371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 17:21:45.376970    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:21:45.456269    4371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:21:45.460949    4371 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:21:45.461002    4371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:21:45.464800    4371 api_server.go:72] duration metric: took 105.39825ms to wait for apiserver process to appear ...
	I0923 17:21:45.464807    4371 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:21:45.464814    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:45.494930    4371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 17:21:45.498236    4371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:21:45.821292    4371 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 17:21:45.821303    4371 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 17:21:50.465202    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:50.465278    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:55.465726    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:55.465772    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:00.466124    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:00.466151    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:05.466561    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:05.466597    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:10.466757    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:10.466782    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:15.466974    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:15.467012    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 17:22:15.823580    4371 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 17:22:15.828232    4371 out.go:177] * Enabled addons: storage-provisioner
	I0923 17:22:15.836191    4371 addons.go:510] duration metric: took 30.476988542s for enable addons: enabled=[storage-provisioner]
	I0923 17:22:20.467248    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:20.467281    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:25.468070    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:25.468113    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:30.468668    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:30.468708    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:35.469441    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:35.469464    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:40.469734    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:40.469763    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:45.470776    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:45.470877    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:45.481918    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:22:45.482003    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:45.492126    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:22:45.492215    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:45.502683    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:22:45.502776    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:45.513317    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:22:45.513404    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:45.530156    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:22:45.530244    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:45.541299    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:22:45.541387    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:45.551832    4371 logs.go:276] 0 containers: []
	W0923 17:22:45.551845    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:45.551915    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:45.562597    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:22:45.562612    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:22:45.562618    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:22:45.574405    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:22:45.574420    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:22:45.586614    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:22:45.586624    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:22:45.598857    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:45.598867    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:45.624045    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:22:45.624055    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:45.635764    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:22:45.635775    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:22:45.649925    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:45.649935    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:45.654259    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:45.654266    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:45.695043    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:22:45.695053    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:22:45.713724    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:22:45.713735    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:22:45.728614    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:22:45.728625    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:22:45.744859    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:22:45.744870    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:22:45.762889    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:45.762901    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:48.303697    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:53.306311    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:53.306413    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:53.317619    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:22:53.317705    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:53.329122    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:22:53.329212    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:53.341126    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:22:53.341269    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:53.354643    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:22:53.354733    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:53.366176    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:22:53.366262    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:53.378544    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:22:53.378631    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:53.388616    4371 logs.go:276] 0 containers: []
	W0923 17:22:53.388626    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:53.388696    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:53.403497    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:22:53.403510    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:53.403518    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:53.408863    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:22:53.408869    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:22:53.421052    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:22:53.421062    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:22:53.439062    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:22:53.439077    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:53.450854    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:53.450869    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:53.490589    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:22:53.490600    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:22:53.504598    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:22:53.504613    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:22:53.520628    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:22:53.520641    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:22:53.532447    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:22:53.532463    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:22:53.546996    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:22:53.547010    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:22:53.558918    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:22:53.558931    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:22:53.570478    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:53.570491    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:53.595358    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:53.595366    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:56.129734    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:01.132039    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:01.132141    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:01.145667    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:01.145754    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:01.161114    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:01.161201    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:01.172038    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:01.172121    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:01.183046    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:01.183135    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:01.193840    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:01.193925    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:01.204954    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:01.205036    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:01.216165    4371 logs.go:276] 0 containers: []
	W0923 17:23:01.216182    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:01.216261    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:01.227938    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:01.227953    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:01.227960    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:01.253742    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:01.253762    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:01.291083    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:01.291100    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:01.303844    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:01.303856    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:01.318324    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:01.318337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:01.332356    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:01.332372    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:01.343600    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:01.343612    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:01.355169    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:01.355184    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:01.370619    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:01.370634    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:01.390137    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:01.390149    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:01.429368    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:01.429375    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:01.434006    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:01.434015    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:01.446337    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:01.446348    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:03.959524    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:08.961688    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:08.961784    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:08.973160    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:08.973250    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:08.985023    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:08.985105    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:08.996385    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:08.996470    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:09.012625    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:09.012711    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:09.023921    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:09.024006    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:09.035578    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:09.035679    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:09.047057    4371 logs.go:276] 0 containers: []
	W0923 17:23:09.047070    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:09.047144    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:09.062667    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:09.062685    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:09.062691    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:09.081854    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:09.081862    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:09.094651    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:09.094663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:09.106784    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:09.106797    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:09.125528    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:09.125539    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:09.149905    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:09.149916    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:09.189034    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:09.189051    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:09.194016    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:09.194032    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:09.232123    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:09.232139    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:09.247584    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:09.247594    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:09.262835    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:09.262850    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:09.274718    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:09.274732    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:09.285592    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:09.285610    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
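[Annotation] Two details of the collection loop are worth noting. First, the "container status" step uses a shell fallback — sudo `which crictl || echo crictl` ps -a || sudo docker ps -a — preferring crictl and falling back to plain docker if crictl is absent or errors. Second, the order of the "Gathering logs for ..." steps changes from cycle to cycle, which is consistent with (though not proof of) ranging over a Go map, whose iteration order is randomized.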
	I0923 17:23:11.799566    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:16.801724    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:16.801799    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:16.813186    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:16.813272    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:16.824858    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:16.824977    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:16.835949    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:16.836002    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:16.847686    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:16.847746    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:16.859024    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:16.859101    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:16.869976    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:16.870065    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:16.882113    4371 logs.go:276] 0 containers: []
	W0923 17:23:16.882125    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:16.882213    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:16.898123    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:16.898138    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:16.898143    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:16.935156    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:16.935170    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:16.950101    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:16.950112    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:16.962695    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:16.962707    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:16.989269    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:16.989287    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:17.002247    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:17.002266    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:17.015013    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:17.015031    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:17.033639    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:17.033653    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:17.075472    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:17.075492    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:17.080983    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:17.080994    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:17.096902    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:17.096912    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:17.109416    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:17.109429    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:17.121711    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:17.121725    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
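[Annotation] One more note on the "describe nodes" step seen in every pass: it invokes the version-matched kubectl binary that minikube stages inside the VM (/var/lib/minikube/binaries/v1.24.1/kubectl) against /var/lib/minikube/kubeconfig, so node descriptions can be gathered without relying on any kubectl installed on the host.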
	I0923 17:23:19.644417    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:24.646692    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:24.647015    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:24.675276    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:24.675427    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:24.697744    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:24.697806    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:24.711717    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:24.711774    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:24.723255    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:24.723329    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:24.734852    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:24.734915    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:24.746324    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:24.746377    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:24.757318    4371 logs.go:276] 0 containers: []
	W0923 17:23:24.757326    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:24.757363    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:24.768810    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:24.768826    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:24.768831    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:24.785063    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:24.785081    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:24.800688    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:24.800706    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:24.813453    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:24.813465    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:24.829522    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:24.829533    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:24.842687    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:24.842699    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:24.885027    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:24.885041    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:24.922271    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:24.922283    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:24.939234    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:24.939248    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:24.957119    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:24.957129    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:24.969453    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:24.969465    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:24.994005    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:24.994021    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:24.998633    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:24.998645    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:27.513447    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:32.515796    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:32.516118    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:32.539412    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:32.539546    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:32.554544    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:32.554638    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:32.567366    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:32.567460    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:32.581907    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:32.582012    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:32.592377    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:32.592464    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:32.603574    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:32.603658    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:32.614006    4371 logs.go:276] 0 containers: []
	W0923 17:23:32.614017    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:32.614085    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:32.625014    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:32.625029    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:32.625035    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:32.629819    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:32.629831    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:32.642260    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:32.642275    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:32.660263    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:32.660277    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:32.672695    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:32.672708    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:32.685738    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:32.685749    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:32.699887    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:32.699898    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:32.724735    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:32.724754    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:32.764003    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:32.764025    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:32.801788    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:32.801800    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:32.831350    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:32.831367    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:32.846419    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:32.846430    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:32.858605    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:32.858613    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:35.376294    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:40.378687    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:40.378967    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:40.399332    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:40.399455    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:40.417680    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:40.417782    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:40.429364    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:40.429456    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:40.440187    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:40.440283    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:40.450816    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:40.450909    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:40.461448    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:40.461526    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:40.474343    4371 logs.go:276] 0 containers: []
	W0923 17:23:40.474356    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:40.474436    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:40.491008    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:40.491024    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:40.491030    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:40.530319    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:40.530332    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:40.534882    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:40.534888    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:40.572649    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:40.572659    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:40.590857    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:40.590871    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:40.603649    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:40.603663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:40.616510    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:40.616522    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:40.628834    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:40.628847    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:40.641556    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:40.641569    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:40.666268    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:40.666280    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:40.681632    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:40.681650    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:40.697586    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:40.697597    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:40.715892    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:40.715902    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:43.231748    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:48.233920    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:48.234056    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:48.245458    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:48.245546    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:48.259917    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:48.260006    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:48.270751    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:48.270833    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:48.281592    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:48.281676    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:48.291846    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:48.291939    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:48.301909    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:48.301982    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:48.312226    4371 logs.go:276] 0 containers: []
	W0923 17:23:48.312238    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:48.312301    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:48.322807    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:48.322827    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:48.322834    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:48.358217    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:48.358227    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:48.375184    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:48.375196    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:48.400712    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:48.400722    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:48.418087    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:48.418097    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:48.457606    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:48.457627    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:48.462838    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:48.462850    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:48.479507    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:48.479518    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:48.495553    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:48.495570    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:48.508730    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:48.508743    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:48.524456    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:48.524467    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:48.538464    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:48.538475    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:48.551629    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:48.551641    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:51.066642    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:56.068951    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:56.069155    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:56.085979    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:56.086095    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:56.099430    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:56.099512    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:56.111196    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:56.111287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:56.121711    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:56.121794    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:56.132459    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:56.132547    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:56.143163    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:56.143251    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:56.153894    4371 logs.go:276] 0 containers: []
	W0923 17:23:56.153908    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:56.153985    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:56.164638    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:56.164657    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:56.164662    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:56.203621    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:56.203629    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:56.217796    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:56.217811    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:56.229367    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:56.229381    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:56.243713    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:56.243727    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:56.255266    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:56.255282    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:56.266965    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:56.266976    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:56.271690    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:56.271698    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:56.305789    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:56.305801    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:56.319641    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:56.319656    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:56.331164    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:56.331174    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:56.342544    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:56.342553    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:56.365790    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:56.365797    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:58.894112    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:03.896360    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:03.896720    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:03.926445    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:03.926588    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:03.944181    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:03.944282    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:03.957799    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:03.957894    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:03.972915    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:03.973000    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:03.983645    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:03.983733    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:03.994333    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:03.994407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:04.004786    4371 logs.go:276] 0 containers: []
	W0923 17:24:04.004801    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:04.004865    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:04.015110    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:04.015126    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:04.015130    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:04.027133    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:04.027147    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:04.039064    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:04.039074    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:04.043476    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:04.043485    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:04.057821    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:04.057831    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:04.096902    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:04.096912    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:04.111554    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:04.111565    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:04.123737    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:04.123750    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:04.135613    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:04.135626    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:04.175192    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:04.175201    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:04.186889    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:04.186901    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:04.201177    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:04.201191    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:04.213012    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:04.213024    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:04.230711    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:04.230721    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:04.243304    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:04.243319    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:06.770287    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:11.772961    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:11.773299    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:11.804952    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:11.805114    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:11.823353    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:11.823449    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:11.839510    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:11.839608    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:11.851683    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:11.851769    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:11.862245    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:11.862322    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:11.873501    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:11.873588    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:11.883942    4371 logs.go:276] 0 containers: []
	W0923 17:24:11.883958    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:11.884033    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:11.895196    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:11.895214    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:11.895220    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:11.900343    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:11.900352    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:11.925367    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:11.925379    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:11.949183    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:11.949192    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:11.967143    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:11.967155    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:11.979057    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:11.979082    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:11.990240    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:11.990252    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:12.002345    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:12.002356    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:12.017012    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:12.017027    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:12.029193    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:12.029205    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:12.043717    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:12.043730    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:12.076211    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:12.076225    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:12.114771    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:12.114784    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:12.150788    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:12.150805    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:12.162701    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:12.162709    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:14.686812    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:19.689196    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:19.689663    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:19.718586    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:19.718741    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:19.736895    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:19.737005    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:19.750457    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:19.750557    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:19.762149    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:19.762232    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:19.772612    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:19.772695    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:19.783465    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:19.783550    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:19.794473    4371 logs.go:276] 0 containers: []
	W0923 17:24:19.794485    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:19.794563    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:19.807026    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:19.807043    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:19.807048    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:19.821814    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:19.821825    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:19.839781    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:19.839792    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:19.851596    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:19.851610    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:19.863353    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:19.863365    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:19.881125    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:19.881141    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:19.897102    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:19.897114    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:19.935905    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:19.935916    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:19.970619    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:19.970631    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:19.982701    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:19.982711    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:19.999070    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:19.999085    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:20.024717    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:20.024727    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:20.036359    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:20.036372    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:20.041633    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:20.041646    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:20.067045    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:20.067059    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:22.581484    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:27.583705    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:27.583958    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:27.602387    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:27.602491    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:27.615550    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:27.615633    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:27.626370    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:27.626458    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:27.638334    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:27.638407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:27.649308    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:27.649384    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:27.660241    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:27.660316    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:27.671202    4371 logs.go:276] 0 containers: []
	W0923 17:24:27.671215    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:27.671276    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:27.682445    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:27.682463    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:27.682470    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:27.721793    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:27.721801    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:27.735422    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:27.735433    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:27.746977    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:27.746988    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:27.758639    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:27.758649    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:27.763281    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:27.763291    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:27.778205    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:27.778215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:27.795003    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:27.795013    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:27.809902    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:27.809913    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:27.821976    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:27.821987    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:27.833468    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:27.833479    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:27.856977    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:27.856985    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:27.892181    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:27.892197    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:27.909175    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:27.909188    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:27.921277    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:27.921290    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:30.439971    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:35.441936    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:35.442253    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:35.461248    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:35.461370    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:35.475719    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:35.475821    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:35.488094    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:35.488175    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:35.498958    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:35.499047    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:35.509558    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:35.509653    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:35.520190    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:35.520278    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:35.530744    4371 logs.go:276] 0 containers: []
	W0923 17:24:35.530756    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:35.530829    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:35.541402    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:35.541420    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:35.541426    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:35.580760    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:35.580771    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:35.595414    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:35.595424    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:35.607816    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:35.607827    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:35.629111    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:35.629124    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:35.646364    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:35.646374    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:35.657181    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:35.657192    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:35.661736    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:35.661743    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:35.673461    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:35.673472    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:35.696892    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:35.696900    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:35.718047    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:35.718058    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:35.730309    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:35.730324    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:35.765657    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:35.765668    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:35.780840    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:35.780852    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:35.793039    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:35.793054    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:38.307348    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:43.309975    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:43.310153    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:43.323235    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:43.323326    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:43.334559    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:43.334647    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:43.346701    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:43.346792    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:43.358272    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:43.358360    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:43.368298    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:43.368383    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:43.379178    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:43.379253    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:43.389327    4371 logs.go:276] 0 containers: []
	W0923 17:24:43.389339    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:43.389415    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:43.406700    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:43.406715    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:43.406721    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:43.418604    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:43.418613    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:43.442597    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:43.442604    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:43.454065    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:43.454076    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:43.491947    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:43.491963    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:43.504090    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:43.504100    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:43.518987    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:43.518997    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:43.533121    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:43.533132    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:43.574878    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:43.574892    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:43.587303    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:43.587315    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:43.599191    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:43.599203    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:43.618534    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:43.618548    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:43.623269    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:43.623275    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:43.635436    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:43.635450    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:43.646865    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:43.646874    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:46.164179    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:51.165558    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:51.165865    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:51.185178    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:51.185287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:51.199771    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:51.199873    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:51.212617    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:51.212707    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:51.223622    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:51.223700    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:51.234713    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:51.234801    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:51.245934    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:51.246047    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:51.262196    4371 logs.go:276] 0 containers: []
	W0923 17:24:51.262208    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:51.262285    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:51.273203    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:51.273220    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:51.273225    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:51.288224    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:51.288238    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:51.299196    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:51.299207    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:51.317287    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:51.317297    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:51.328593    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:51.328603    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:51.362811    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:51.362823    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:51.377941    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:51.377952    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:51.382316    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:51.382322    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:51.393687    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:51.393699    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:51.432689    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:51.432698    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:51.447340    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:51.447357    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:51.460494    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:51.460507    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:51.481932    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:51.481946    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:51.493634    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:51.493650    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:51.504973    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:51.504990    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:54.032561    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:59.033169    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:59.033399    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:59.050970    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:59.051074    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:59.063990    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:59.064080    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:59.075853    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:59.075943    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:59.090396    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:59.090474    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:59.102620    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:59.102697    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:59.113396    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:59.113481    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:59.123751    4371 logs.go:276] 0 containers: []
	W0923 17:24:59.123764    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:59.123838    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:59.134447    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:59.134467    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:59.134473    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:59.148087    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:59.148098    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:59.159952    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:59.159964    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:59.196287    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:59.196298    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:59.200804    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:59.200813    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:59.212006    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:59.212017    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:59.226898    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:59.226908    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:59.264758    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:59.264766    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:59.276911    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:59.276923    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:59.298855    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:59.298864    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:59.323597    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:59.323613    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:59.335479    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:59.335494    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:59.349438    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:59.349453    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:59.363423    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:59.363438    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:59.380071    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:59.380089    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:01.896427    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:06.898692    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:06.898919    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:06.918921    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:06.919013    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:06.930969    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:06.931051    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:06.941869    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:06.941964    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:06.952929    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:06.953012    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:06.964007    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:06.964094    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:06.975187    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:06.975264    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:06.985805    4371 logs.go:276] 0 containers: []
	W0923 17:25:06.985817    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:06.985887    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:06.997459    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:06.997476    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:06.997482    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:07.034469    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:07.034483    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:07.047060    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:07.047072    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:07.060304    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:07.060318    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:07.085322    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:07.085335    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:07.089949    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:07.089957    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:07.104280    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:07.104293    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:07.116476    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:07.116488    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:07.131784    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:07.131799    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:07.173820    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:07.173835    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:07.187770    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:07.187786    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:07.199332    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:07.199347    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:07.214527    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:07.214540    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:07.226517    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:07.226528    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:07.243490    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:07.243505    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:09.761466    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:14.763794    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:14.764064    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:14.784284    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:14.784397    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:14.798333    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:14.798426    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:14.810129    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:14.810205    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:14.820895    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:14.820977    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:14.831196    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:14.831286    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:14.841937    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:14.842020    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:14.852387    4371 logs.go:276] 0 containers: []
	W0923 17:25:14.852399    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:14.852466    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:14.862635    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:14.862652    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:14.862658    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:14.874946    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:14.874959    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:14.886874    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:14.886884    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:14.891990    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:14.891997    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:14.927097    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:14.927107    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:14.942327    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:14.942337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:14.960711    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:14.960723    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:14.975334    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:14.975347    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:14.987639    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:14.987650    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:15.011300    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:15.011309    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:15.022949    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:15.022962    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:15.060877    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:15.060886    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:15.075181    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:15.075196    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:15.086982    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:15.087000    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:15.098689    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:15.098705    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:17.621812    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:22.624019    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:22.624157    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:22.635864    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:22.635957    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:22.646399    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:22.646493    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:22.657245    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:22.657327    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:22.668508    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:22.668596    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:22.682216    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:22.682292    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:22.693121    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:22.693204    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:22.705680    4371 logs.go:276] 0 containers: []
	W0923 17:25:22.705692    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:22.705762    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:22.715876    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:22.715895    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:22.715900    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:22.720538    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:22.720549    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:22.732762    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:22.732776    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:22.747187    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:22.747198    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:22.764958    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:22.764967    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:22.778360    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:22.778373    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:22.819157    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:22.819169    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:22.833668    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:22.833677    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:22.847894    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:22.847904    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:22.860359    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:22.860370    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:22.872208    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:22.872220    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:22.906900    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:22.906913    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:22.920928    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:22.920942    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:22.935118    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:22.935129    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:22.946896    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:22.946908    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:25.473508    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:30.473897    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:30.474148    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:30.496488    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:30.496605    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:30.511866    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:30.511964    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:30.525079    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:30.525175    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:30.535763    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:30.535841    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:30.550492    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:30.550572    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:30.561089    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:30.561172    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:30.571748    4371 logs.go:276] 0 containers: []
	W0923 17:25:30.571761    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:30.571837    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:30.582367    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:30.582386    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:30.582393    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:30.594551    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:30.594563    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:30.599436    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:30.599442    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:30.612244    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:30.612255    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:30.623948    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:30.623963    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:30.647307    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:30.647318    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:30.662445    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:30.662455    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:30.676108    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:30.676118    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:30.694396    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:30.694407    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:30.734186    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:30.734196    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:30.748644    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:30.748654    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:30.760078    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:30.760089    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:30.771956    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:30.771967    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:30.806620    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:30.806633    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:30.820680    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:30.820694    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:33.334968    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:38.337230    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:38.337490    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:38.367066    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:38.367200    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:38.382974    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:38.383075    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:38.396315    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:38.396407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:38.407572    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:38.407654    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:38.418043    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:38.418129    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:38.428196    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:38.428287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:38.438381    4371 logs.go:276] 0 containers: []
	W0923 17:25:38.438394    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:38.438475    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:38.449082    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:38.449100    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:38.449106    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:38.460942    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:38.460956    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:38.481860    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:38.481872    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:38.517025    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:38.517039    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:38.531633    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:38.531647    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:38.546862    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:38.546874    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:38.558018    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:38.558033    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:38.580798    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:38.580806    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:38.619909    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:38.619918    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:38.624891    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:38.624900    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:38.636609    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:38.636622    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:38.650205    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:38.650218    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:38.661770    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:38.661781    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:38.678160    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:38.678172    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:38.689550    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:38.689559    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:41.203083    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:46.205452    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:46.210182    4371 out.go:201] 
	W0923 17:25:46.213125    4371 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 17:25:46.213143    4371 out.go:270] * 
	W0923 17:25:46.214646    4371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:25:46.224089    4371 out.go:201] 

** /stderr **
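The stderr above traces minikube's node wait loop: roughly every 2.5s it probes https://10.0.2.15:8443/healthz with a 5s client timeout (the api_server.go:253/269 pairs), and after each failed probe it re-enumerates the k8s_* containers and re-dumps their logs (the logs.go:123/276 runs), until the overall "wait 6m0s for node" budget expires and the run exits with GUEST_START. Below is a minimal Go sketch of that poll-and-gather pattern; the helper names and constants are illustrative, not minikube's actual source.

// Sketch only: approximates the wait loop traced in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// healthy probes the apiserver /healthz endpoint with the same ~5s client
// timeout visible between each "Checking" and "stopped" line pair.
// Verification is skipped to tolerate the cluster's self-signed cert.
func healthy(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// dumpLogs mirrors the repeated `docker logs --tail 400 <id>` calls.
func dumpLogs(containerID string) {
	out, _ := exec.Command("docker", "logs", "--tail", "400", containerID).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", containerID, out)
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget
	for time.Now().Before(deadline) {
		if healthy("https://10.0.2.15:8443/healthz") {
			return // apiserver reported healthy; the node wait succeeds
		}
		// On failure, gather diagnostics from each control-plane container,
		// e.g. the kube-apiserver container seen above.
		dumpLogs("92defea7a2e0")
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("X Exiting due to GUEST_START: apiserver healthz never reported healthy")
}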
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-23 17:25:46.339405 -0700 PDT m=+2940.308779376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-903000 -n running-upgrade-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-903000 -n running-upgrade-903000: exit status 2 (15.642968125s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
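To rerun this post-mortem by hand against the same profile, the two commands the harness invokes (quoted verbatim from the dbg lines in this section) are:

	out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-903000 -n running-upgrade-903000
	out/minikube-darwin-arm64 -p running-upgrade-903000 logs -n 25

The second command produces the Audit table and "Last Start" dump that follow.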
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-903000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-263000          | force-systemd-flag-263000 | jenkins | v1.34.0 | 23 Sep 24 17:15 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-831000              | force-systemd-env-831000  | jenkins | v1.34.0 | 23 Sep 24 17:15 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-831000           | force-systemd-env-831000  | jenkins | v1.34.0 | 23 Sep 24 17:15 PDT | 23 Sep 24 17:15 PDT |
	| start   | -p docker-flags-241000                | docker-flags-241000       | jenkins | v1.34.0 | 23 Sep 24 17:15 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-263000             | force-systemd-flag-263000 | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-263000          | force-systemd-flag-263000 | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT | 23 Sep 24 17:16 PDT |
	| start   | -p cert-expiration-029000             | cert-expiration-029000    | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-241000 ssh               | docker-flags-241000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-241000 ssh               | docker-flags-241000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-241000                | docker-flags-241000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT | 23 Sep 24 17:16 PDT |
	| start   | -p cert-options-849000                | cert-options-849000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-849000 ssh               | cert-options-849000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-849000 -- sudo        | cert-options-849000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-849000                | cert-options-849000       | jenkins | v1.34.0 | 23 Sep 24 17:16 PDT | 23 Sep 24 17:16 PDT |
	| start   | -p running-upgrade-903000             | minikube                  | jenkins | v1.26.0 | 23 Sep 24 17:16 PDT | 23 Sep 24 17:17 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-903000             | running-upgrade-903000    | jenkins | v1.34.0 | 23 Sep 24 17:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-029000             | cert-expiration-029000    | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-029000             | cert-expiration-029000    | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT | 23 Sep 24 17:19 PDT |
	| start   | -p kubernetes-upgrade-953000          | kubernetes-upgrade-953000 | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-953000          | kubernetes-upgrade-953000 | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT | 23 Sep 24 17:19 PDT |
	| start   | -p kubernetes-upgrade-953000          | kubernetes-upgrade-953000 | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-953000          | kubernetes-upgrade-953000 | jenkins | v1.34.0 | 23 Sep 24 17:19 PDT | 23 Sep 24 17:19 PDT |
	| start   | -p stopped-upgrade-180000             | minikube                  | jenkins | v1.26.0 | 23 Sep 24 17:19 PDT | 23 Sep 24 17:20 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-180000 stop           | minikube                  | jenkins | v1.26.0 | 23 Sep 24 17:20 PDT | 23 Sep 24 17:20 PDT |
	| start   | -p stopped-upgrade-180000             | stopped-upgrade-180000    | jenkins | v1.34.0 | 23 Sep 24 17:20 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 17:20:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 17:20:26.937184    4508 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:20:26.937326    4508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:20:26.937330    4508 out.go:358] Setting ErrFile to fd 2...
	I0923 17:20:26.937332    4508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:20:26.937495    4508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:20:26.938705    4508 out.go:352] Setting JSON to false
	I0923 17:20:26.958502    4508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2989,"bootTime":1727134237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:20:26.958581    4508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:20:26.963511    4508 out.go:177] * [stopped-upgrade-180000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:20:26.971430    4508 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:20:26.971514    4508 notify.go:220] Checking for updates...
	I0923 17:20:26.977439    4508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:20:26.980492    4508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:20:26.981612    4508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:20:26.984458    4508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:20:26.987446    4508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:20:26.990729    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:20:26.993389    4508 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 17:20:26.996418    4508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:20:27.000394    4508 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:20:27.007411    4508 start.go:297] selected driver: qemu2
	I0923 17:20:27.007417    4508 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:27.007460    4508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:20:27.010002    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:20:27.010039    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:20:27.010059    4508 start.go:340] cluster config:
	{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:27.010109    4508 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:20:27.017444    4508 out.go:177] * Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	I0923 17:20:27.021466    4508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:20:27.021480    4508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 17:20:27.021488    4508 cache.go:56] Caching tarball of preloaded images
	I0923 17:20:27.021534    4508 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:20:27.021540    4508 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 17:20:27.021588    4508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0923 17:20:27.021962    4508 start.go:360] acquireMachinesLock for stopped-upgrade-180000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:20:27.021990    4508 start.go:364] duration metric: took 22.709µs to acquireMachinesLock for "stopped-upgrade-180000"
	I0923 17:20:27.022000    4508 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:20:27.022004    4508 fix.go:54] fixHost starting: 
	I0923 17:20:27.022114    4508 fix.go:112] recreateIfNeeded on stopped-upgrade-180000: state=Stopped err=<nil>
	W0923 17:20:27.022123    4508 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:20:27.030437    4508 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	I0923 17:20:26.600550    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:26.601220    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:26.643199    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:26.643367    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:26.665105    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:26.665234    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:26.683150    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:26.683250    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:26.695307    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:26.695401    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:26.706177    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:26.706269    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:26.716935    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:26.717019    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:26.727022    4371 logs.go:276] 0 containers: []
	W0923 17:20:26.727034    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:26.727109    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:26.737512    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:26.737530    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:26.737536    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:26.749170    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:26.749179    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:26.760734    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:26.760745    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:26.785609    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:26.785618    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:26.805395    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:26.805406    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:26.826010    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:26.826021    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:26.845105    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:26.845119    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:26.880393    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:26.880437    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:26.891736    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:26.891747    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:26.907292    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:26.907304    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:26.919293    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:26.919305    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:26.932132    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:26.932147    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:26.944957    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:26.944971    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:26.950534    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:26.950545    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:26.962313    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:26.962323    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:26.973860    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:26.973870    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:26.985101    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:26.985110    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
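
The cycle above recurs throughout this run: enumerate containers for each control-plane component by name filter, then tail each container's logs. A minimal local sketch of that pattern in Go; the helper name is illustrative, and minikube issues the same docker commands over SSH rather than locally:

    // gather.go: list container IDs per component, then tail their logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // Mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s]: %d bytes\n", c, id, len(logs))
            }
        }
    }
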
	I0923 17:20:27.033309    4508 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:20:27.033380    4508 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50494-:22,hostfwd=tcp::50495-:2376,hostname=stopped-upgrade-180000 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/disk.qcow2
	I0923 17:20:27.077996    4508 main.go:141] libmachine: STDOUT: 
	I0923 17:20:27.078022    4508 main.go:141] libmachine: STDERR: 
	I0923 17:20:27.078028    4508 main.go:141] libmachine: Waiting for VM to start (ssh -p 50494 docker@127.0.0.1)...
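
The qemu2 restart above daemonizes the VM and then blocks until the forwarded SSH port answers. A small sketch of that wait, assuming the hostfwd port 50494 from the command line above; the polling interval and overall timeout are illustrative:

    // waitssh.go: poll a forwarded SSH port until it accepts TCP connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // the VM's sshd is up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        // 50494 is the hostfwd SSH port from the qemu invocation above.
        if err := waitForPort("127.0.0.1:50494", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
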
	I0923 17:20:29.528120    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:34.530402    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:34.530606    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:34.542813    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:34.542914    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:34.554184    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:34.554281    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:34.569304    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:34.569378    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:34.579398    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:34.579483    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:34.595471    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:34.595545    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:34.606248    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:34.606327    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:34.616220    4371 logs.go:276] 0 containers: []
	W0923 17:20:34.616231    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:34.616300    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:34.626559    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:34.626578    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:34.626584    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:34.631309    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:34.631316    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:34.667523    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:34.667533    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:34.679761    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:34.679771    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:34.691597    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:34.691608    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:34.703269    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:34.703282    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:34.730104    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:34.730114    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:34.753404    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:34.753421    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:34.767783    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:34.767795    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:34.779370    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:34.779387    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:34.797436    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:34.797452    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:34.812035    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:34.812045    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:34.828168    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:34.828177    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:34.852281    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:34.852289    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:34.893667    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:34.893676    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:34.906349    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:34.906360    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:34.917912    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:34.917924    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:37.439866    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:42.442112    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:42.442407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:42.463241    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:42.463389    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:42.478737    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:42.478839    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:42.490733    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:42.490822    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:42.501553    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:42.501630    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:42.516430    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:42.516510    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:42.526374    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:42.526456    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:42.537229    4371 logs.go:276] 0 containers: []
	W0923 17:20:42.537243    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:42.537301    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:42.552115    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:42.552130    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:42.552135    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:42.562954    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:42.562963    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:42.587841    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:42.587848    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:42.629283    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:42.629291    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:42.665420    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:42.665434    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:42.683813    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:42.683827    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:42.696141    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:42.696151    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:42.710427    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:42.710439    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:42.721603    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:42.721617    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:42.738832    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:42.738846    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:42.753342    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:42.753353    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:42.765629    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:42.765640    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:42.770535    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:42.770545    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:42.784895    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:42.784905    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:42.796092    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:42.796104    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:42.806948    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:42.806962    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:42.818375    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:42.818390    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:45.331756    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
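
Each "Checking apiserver healthz" line above is an HTTPS GET against https://10.0.2.15:8443/healthz that gives up after roughly five seconds, producing the repeated "context deadline exceeded" failures. A hedged sketch of such a probe; certificate verification is skipped purely for illustration, and minikube's actual client setup may differ:

    // healthz.go: one apiserver health probe with a short client timeout.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between attempts above
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }
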
	I0923 17:20:47.137261    4508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0923 17:20:47.138461    4508 machine.go:93] provisionDockerMachine start ...
	I0923 17:20:47.138610    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.139057    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.139074    4508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 17:20:47.228457    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 17:20:47.228492    4508 buildroot.go:166] provisioning hostname "stopped-upgrade-180000"
	I0923 17:20:47.228624    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.228871    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.228883    4508 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-180000 && echo "stopped-upgrade-180000" | sudo tee /etc/hostname
	I0923 17:20:47.310966    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-180000
	
	I0923 17:20:47.311062    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.311233    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.311246    4508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-180000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-180000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-180000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 17:20:47.385402    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 17:20:47.385415    4508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19696-1109/.minikube CaCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19696-1109/.minikube}
	I0923 17:20:47.385423    4508 buildroot.go:174] setting up certificates
	I0923 17:20:47.385429    4508 provision.go:84] configureAuth start
	I0923 17:20:47.385433    4508 provision.go:143] copyHostCerts
	I0923 17:20:47.385521    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem, removing ...
	I0923 17:20:47.385530    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem
	I0923 17:20:47.385771    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem (1082 bytes)
	I0923 17:20:47.385988    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem, removing ...
	I0923 17:20:47.385993    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem
	I0923 17:20:47.386064    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem (1123 bytes)
	I0923 17:20:47.386196    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem, removing ...
	I0923 17:20:47.386200    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem
	I0923 17:20:47.386274    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem (1679 bytes)
	I0923 17:20:47.386381    4508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-180000 san=[127.0.0.1 localhost minikube stopped-upgrade-180000]
	I0923 17:20:47.480886    4508 provision.go:177] copyRemoteCerts
	I0923 17:20:47.480936    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 17:20:47.480944    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:47.516374    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 17:20:47.523265    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 17:20:47.529936    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 17:20:47.537081    4508 provision.go:87] duration metric: took 151.642667ms to configureAuth
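
The configureAuth step above refreshes the host certs and then issues a server certificate for the SANs listed in the log (127.0.0.1, localhost, minikube, stopped-upgrade-180000). A self-contained sketch of that kind of issuance with crypto/x509; key sizes, serial numbers, and field choices are assumptions, not minikube's exact parameters:

    // servercert.go: issue a CA-signed server cert with IP and DNS SANs.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA standing in for the cached minikube CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the provision log line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-180000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-180000"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
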
	I0923 17:20:47.537090    4508 buildroot.go:189] setting minikube options for container-runtime
	I0923 17:20:47.537189    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:20:47.537233    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.537316    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.537323    4508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 17:20:47.603984    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 17:20:47.603995    4508 buildroot.go:70] root file system type: tmpfs
	I0923 17:20:47.604057    4508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 17:20:47.604121    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.604236    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.604271    4508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 17:20:47.675030    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 17:20:47.675100    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.675224    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.675232    4508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 17:20:48.052414    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 17:20:48.052428    4508 machine.go:96] duration metric: took 913.956333ms to provisionDockerMachine
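
The `diff -u ... || { mv ...; systemctl ... }` command above installs the freshly rendered unit and restarts docker only when it differs from what is on disk; here the diff fails outright because docker.service does not exist yet, so the new unit is simply moved into place. The same update-if-changed idiom in Go, under the assumption of the paths above and root privileges:

    // unitupdate.go: replace a unit file only when its contents changed.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func updateIfChanged(current, candidate string) (bool, error) {
        old, readErr := os.ReadFile(current) // may not exist yet, as in the log
        newer, err := os.ReadFile(candidate)
        if err != nil {
            return false, err
        }
        if readErr == nil && bytes.Equal(old, newer) {
            return false, nil // identical: skip the daemon-reload and restart
        }
        if err := os.Rename(candidate, current); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := updateIfChanged(
            "/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new")
        if err != nil {
            fmt.Println(err)
            return
        }
        if changed {
            exec.Command("systemctl", "daemon-reload").Run()
            exec.Command("systemctl", "restart", "docker").Run()
        }
    }
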
	I0923 17:20:48.052434    4508 start.go:293] postStartSetup for "stopped-upgrade-180000" (driver="qemu2")
	I0923 17:20:48.052441    4508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 17:20:48.052505    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 17:20:48.052514    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:48.088607    4508 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 17:20:48.089797    4508 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 17:20:48.089805    4508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/addons for local assets ...
	I0923 17:20:48.089892    4508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/files for local assets ...
	I0923 17:20:48.090019    4508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0923 17:20:48.090160    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 17:20:48.092955    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:20:48.100034    4508 start.go:296] duration metric: took 47.595417ms for postStartSetup
	I0923 17:20:48.100049    4508 fix.go:56] duration metric: took 21.078193666s for fixHost
	I0923 17:20:48.100085    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:48.100186    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:48.100191    4508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 17:20:48.166813    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727137248.321797546
	
	I0923 17:20:48.166824    4508 fix.go:216] guest clock: 1727137248.321797546
	I0923 17:20:48.166828    4508 fix.go:229] Guest: 2024-09-23 17:20:48.321797546 -0700 PDT Remote: 2024-09-23 17:20:48.100051 -0700 PDT m=+21.184851918 (delta=221.746546ms)
	I0923 17:20:48.166841    4508 fix.go:200] guest clock delta is within tolerance: 221.746546ms
	I0923 17:20:48.166844    4508 start.go:83] releasing machines lock for "stopped-upgrade-180000", held for 21.144998041s
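
The guest-clock check above runs `date +%s.%N` inside the VM and compares the result to the host clock, accepting the run's 221.746546ms delta as within tolerance. A sketch of that comparison using the logged guest timestamp; the tolerance value is an assumption for illustration:

    // clockdelta.go: parse `date +%s.%N` output and compare to the host clock.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        const guestOut = "1727137248.321797546" // SSH command output above
        secStr, frac, _ := strings.Cut(guestOut, ".")
        sec, _ := strconv.ParseInt(secStr, 10, 64)
        ns, _ := strconv.ParseInt((frac + "000000000")[:9], 10, 64) // pad/trim to ns
        guest := time.Unix(sec, ns)

        delta := time.Since(guest) // host clock minus guest clock
        if delta < 0 {
            delta = -delta
        }
        tolerance := 2 * time.Second // assumed threshold; the run accepts ~221ms
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
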
	I0923 17:20:48.166918    4508 ssh_runner.go:195] Run: cat /version.json
	I0923 17:20:48.166928    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:48.166918    4508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 17:20:48.166955    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	W0923 17:20:48.167530    4508 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50494: connect: connection refused
	I0923 17:20:48.167555    4508 retry.go:31] will retry after 373.081313ms: dial tcp [::1]:50494: connect: connection refused
	W0923 17:20:48.201794    4508 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 17:20:48.201845    4508 ssh_runner.go:195] Run: systemctl --version
	I0923 17:20:48.203885    4508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 17:20:48.205532    4508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 17:20:48.205561    4508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 17:20:48.208830    4508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 17:20:48.213653    4508 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
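
The two find/sed invocations above pin every "subnet" and "gateway" in the bridge and podman CNI configs to the 10.244.0.0/16 pod network. The same rewrite expressed with Go regexps over an illustrative conflist fragment (not the actual file from the VM):

    // cnisubnet.go: force a fixed pod subnet into a CNI conflist, sed-style.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `{"name":"podman","plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16","gateway":"10.88.0.1"}]]}}]}`
        subnet := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
        gateway := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
        out := subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
        out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
        fmt.Println(out)
    }
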
	I0923 17:20:48.213667    4508 start.go:495] detecting cgroup driver to use...
	I0923 17:20:48.213756    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:20:48.220887    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 17:20:48.224111    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 17:20:48.227074    4508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 17:20:48.227105    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 17:20:48.229876    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:20:48.232953    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 17:20:48.236389    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:20:48.239333    4508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 17:20:48.242341    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 17:20:48.245525    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 17:20:48.248910    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 17:20:48.252027    4508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 17:20:48.254575    4508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 17:20:48.257420    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:48.335540    4508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 17:20:48.346128    4508 start.go:495] detecting cgroup driver to use...
	I0923 17:20:48.346225    4508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 17:20:48.352309    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:20:48.357310    4508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 17:20:48.364870    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:20:48.369690    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 17:20:48.374385    4508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 17:20:48.424891    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 17:20:48.430387    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:20:48.436413    4508 ssh_runner.go:195] Run: which cri-dockerd
	I0923 17:20:48.437688    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 17:20:48.440478    4508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 17:20:48.445618    4508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 17:20:48.527396    4508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 17:20:48.607210    4508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 17:20:48.607264    4508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 17:20:48.612237    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:48.681883    4508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:20:49.797614    4508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.115721292s)
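
Only the size of the daemon.json payload scp'd a few lines above (130 bytes) is logged, not its contents. A plausible cgroupfs configuration written the same way; the JSON body here is an assumption for illustration, not the bytes minikube actually sent:

    // daemonjson.go: write a minimal docker daemon.json selecting cgroupfs.
    package main

    import "os"

    func main() {
        daemonJSON := []byte(`{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "storage-driver": "overlay2"
    }
    `)
        if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0o644); err != nil {
            panic(err) // requires root, as in the VM
        }
    }
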
	I0923 17:20:49.797680    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 17:20:49.802246    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:20:49.807085    4508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 17:20:49.875446    4508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 17:20:49.957519    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:50.027199    4508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 17:20:50.033160    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:20:50.037793    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:50.115351    4508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 17:20:50.153860    4508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 17:20:50.153968    4508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 17:20:50.156087    4508 start.go:563] Will wait 60s for crictl version
	I0923 17:20:50.156144    4508 ssh_runner.go:195] Run: which crictl
	I0923 17:20:50.157487    4508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 17:20:50.172490    4508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 17:20:50.172583    4508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 17:20:50.188396    4508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
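
"Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a stat poll against the socket with a deadline, after which crictl and docker report their versions. A minimal sketch of that wait; the polling interval is illustrative:

    // waitsock.go: poll stat(2) until a unix socket appears or time runs out.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; the runtime is ready
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
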
	I0923 17:20:50.209615    4508 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 17:20:50.209701    4508 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 17:20:50.211088    4508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 17:20:50.214566    4508 kubeadm.go:883] updating cluster {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0923 17:20:50.214616    4508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:20:50.214667    4508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:20:50.224870    4508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:20:50.224879    4508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:20:50.224928    4508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:20:50.228668    4508 ssh_runner.go:195] Run: which lz4
	I0923 17:20:50.229975    4508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 17:20:50.231370    4508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 17:20:50.231378    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 17:20:51.209091    4508 docker.go:649] duration metric: took 979.165708ms to copy over tarball
	I0923 17:20:51.209168    4508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
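
Because /preloaded.tar.lz4 is absent on the guest (the stat check above fails), the cached image tarball is scp'd over and unpacked into /var. A sketch of the unpack step with the logged tar flags; local paths stand in for the scp transfer, and lz4 must be installed:

    // preload.go: extract the preloaded image tarball into /var, as logged.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("tarball not present; would scp it from the host cache first")
            return
        }
        // Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
        }
    }
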
	I0923 17:20:50.332637    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:20:50.332754    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:50.344823    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:50.344916    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:50.356176    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:50.356266    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:50.368331    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:50.368420    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:50.383220    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:50.383309    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:50.397190    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:50.397280    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:50.409477    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:50.409576    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:50.421771    4371 logs.go:276] 0 containers: []
	W0923 17:20:50.421784    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:50.421858    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:50.433379    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:50.433396    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:50.433401    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:50.450751    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:50.450763    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:50.464057    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:50.464070    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:50.476466    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:50.476478    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:50.490248    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:50.490260    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:50.505831    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:50.505844    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:50.520047    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:50.520063    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:50.532788    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:50.532825    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:50.548615    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:50.548627    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:50.592420    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:50.592434    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:50.597773    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:50.597786    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:50.636725    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:50.636739    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:20:50.652201    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:50.652215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:50.671603    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:50.671615    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:50.697252    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:50.697279    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:50.710647    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:50.710659    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:50.728897    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:50.728914    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:52.373462    4508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164286791s)
	I0923 17:20:52.373476    4508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 17:20:52.389548    4508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:20:52.392983    4508 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 17:20:52.398104    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:52.480651    4508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:20:54.105747    4508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.625087916s)
	I0923 17:20:54.105858    4508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:20:54.125262    4508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:20:54.125272    4508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:20:54.125277    4508 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
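The preload listed above ships its images under the legacy k8s.gcr.io names, while this minikube build checks for them under registry.k8s.io, so every lookup misses and each image is reloaded from the host-side cache instead. A minimal sketch of the equivalent manual workaround (hypothetical, not something the test performs) would be to retag the preloaded images under the names being checked for:

    # retag preloaded control-plane images under the registry.k8s.io names minikube expects
    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done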
	I0923 17:20:54.130599    4508 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.132655    4508 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:54.134140    4508 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.134202    4508 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 17:20:54.136274    4508 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:54.136320    4508 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.137867    4508 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.138311    4508 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 17:20:54.139359    4508 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.139886    4508 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.141320    4508 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.141497    4508 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.142948    4508 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.143090    4508 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.144136    4508 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.144821    4508 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.470043    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.480186    4508 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 17:20:54.480218    4508 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.480283    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.490378    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 17:20:54.500931    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 17:20:54.510816    4508 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 17:20:54.510846    4508 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 17:20:54.510914    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 17:20:54.520967    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 17:20:54.522471    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 17:20:54.524086    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 17:20:54.524098    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 17:20:54.532376    4508 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 17:20:54.532384    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
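Each cache miss follows the same three-step pattern seen above for pause_3.7: a stat existence check on the guest, an scp of the tarball from the host cache, and a streamed docker load. Piping the file through sudo cat means only the read needs elevated rights. A condensed sketch of the guest-side sequence, with the paths from this run:

    IMG=/var/lib/minikube/images/pause_3.7
    # existence check: a non-zero exit means the tarball must be copied over first
    if ! stat -c "%s %y" "$IMG" >/dev/null 2>&1; then
      exit 1   # at this point minikube scps the file from its host-side cache
    fi
    sudo cat "$IMG" | docker load   # stream the tarball into the Docker daemon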
	I0923 17:20:54.547515    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0923 17:20:54.562422    4508 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 17:20:54.562579    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.563329    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 17:20:54.563365    4508 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 17:20:54.563381    4508 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.563417    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.576632    4508 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 17:20:54.576651    4508 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.576698    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 17:20:54.576718    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.576803    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:20:54.582788    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.591833    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 17:20:54.591841    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 17:20:54.591868    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 17:20:54.592007    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:20:54.600393    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 17:20:54.600422    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 17:20:54.600541    4508 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 17:20:54.600560    4508 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.600612    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.629315    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 17:20:54.640896    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.641862    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.675884    4508 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 17:20:54.675909    4508 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.675982    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.691590    4508 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:20:54.691604    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 17:20:54.707439    4508 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 17:20:54.707465    4508 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.707541    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.738037    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 17:20:54.838198    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 17:20:54.838208    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 17:20:54.917345    4508 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:20:54.917361    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0923 17:20:54.993572    4508 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 17:20:54.993717    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.089961    4508 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 17:20:55.089989    4508 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.090043    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 17:20:55.090064    4508 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.103955    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 17:20:55.104097    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:20:55.105447    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 17:20:55.105458    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 17:20:55.133466    4508 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:20:55.133480    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 17:20:55.377767    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 17:20:55.377811    4508 cache_images.go:92] duration metric: took 1.252525833s to LoadCachedImages
	W0923 17:20:55.377852    4508 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0923 17:20:55.377857    4508 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 17:20:55.377898    4508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-180000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
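The empty ExecStart= line in the drop-in above is deliberate: for a non-oneshot systemd service, a drop-in must first clear the inherited command before substituting its own, otherwise systemd rejects the unit for having two ExecStart entries. The merged result can be inspected on the guest:

    # show the kubelet unit with the 10-kubeadm.conf drop-in applied
    sudo systemctl cat kubelet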
	I0923 17:20:55.377973    4508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 17:20:55.391992    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:20:55.392003    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:20:55.392008    4508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 17:20:55.392017    4508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-180000 NodeName:stopped-upgrade-180000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 17:20:55.392084    4508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-180000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
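Before this rendered config is pushed to the guest (as kubeadm.yaml.new below), it could be exercised without touching the node via kubeadm's --dry-run mode, which prints the objects it would create instead of applying them. A hypothetical validation step, not part of this test run:

    # render and validate the generated config without applying it
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run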
	
	I0923 17:20:55.392150    4508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 17:20:55.395014    4508 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 17:20:55.395050    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 17:20:55.397568    4508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 17:20:55.402645    4508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 17:20:55.407348    4508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 17:20:55.412446    4508 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 17:20:55.413560    4508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
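The /etc/hosts update above is idempotent: the grep on the previous line checks for an existing exact entry, and only when it is absent does the brace group run, stripping any stale control-plane.minikube.internal line, appending the current mapping, and copying the result back over /etc/hosts in a single step. The same command, reformatted with comments:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
      echo $'10.0.2.15\tcontrol-plane.minikube.internal'         # append the current mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                                 # replace in one atomic copy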
	I0923 17:20:55.417113    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:55.499150    4508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:20:55.505213    4508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000 for IP: 10.0.2.15
	I0923 17:20:55.505222    4508 certs.go:194] generating shared ca certs ...
	I0923 17:20:55.505231    4508 certs.go:226] acquiring lock for ca certs: {Name:mk0bd8a887d4e289277fd6cf7c9ed1b474966431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.505405    4508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key
	I0923 17:20:55.505464    4508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key
	I0923 17:20:55.505470    4508 certs.go:256] generating profile certs ...
	I0923 17:20:55.505546    4508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key
	I0923 17:20:55.505562    4508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156
	I0923 17:20:55.505573    4508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 17:20:55.625317    4508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 ...
	I0923 17:20:55.625331    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156: {Name:mk018920694709d8ee675a242cd091f45c8350f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.633285    4508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 ...
	I0923 17:20:55.633290    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156: {Name:mk85fedbb527994c11d5c54319fe082e5f6febf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.633449    4508 certs.go:381] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt
	I0923 17:20:55.634860    4508 certs.go:385] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key
	I0923 17:20:55.635052    4508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.key
	I0923 17:20:55.635191    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem (1338 bytes)
	W0923 17:20:55.635223    4508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0923 17:20:55.635230    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 17:20:55.635253    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem (1082 bytes)
	I0923 17:20:55.635275    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem (1123 bytes)
	I0923 17:20:55.635294    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem (1679 bytes)
	I0923 17:20:55.635332    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:20:55.635685    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 17:20:55.642518    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 17:20:55.649461    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 17:20:55.656980    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 17:20:55.664459    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 17:20:55.671622    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 17:20:55.678460    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 17:20:55.685406    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 17:20:55.692885    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 17:20:55.699664    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0923 17:20:55.706360    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0923 17:20:55.713195    4508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 17:20:55.718378    4508 ssh_runner.go:195] Run: openssl version
	I0923 17:20:55.720205    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0923 17:20:55.723190    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.724638    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:53 /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.724661    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.726454    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
	I0923 17:20:55.729805    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0923 17:20:55.733138    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.734659    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:53 /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.734687    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.736369    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 17:20:55.739115    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 17:20:55.741907    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.743286    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.743312    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.745067    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
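Each ln -fs above names its symlink after the certificate's OpenSSL subject hash (the value printed by openssl x509 -hash -noout), which is how OpenSSL locates CA files in /etc/ssl/certs at verification time; b5213941 is minikubeCA's hash in this run. The same linkage by hand:

    # compute the subject hash and create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"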
	I0923 17:20:55.748217    4508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 17:20:55.749635    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 17:20:55.751559    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 17:20:55.753393    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 17:20:55.755398    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 17:20:55.757171    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 17:20:55.759214    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
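The six -checkend 86400 probes above ask openssl whether each cluster certificate will still be valid 86400 seconds (24 hours) from now; a zero exit means no renewal is needed. Standalone form:

    # exit 0 if the cert is still valid in 24h, non-zero if it will have expired by then
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "ok" || echo "renew"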
	I0923 17:20:55.761105    4508 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:55.761186    4508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:20:55.771717    4508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 17:20:55.774671    4508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 17:20:55.774683    4508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 17:20:55.774710    4508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 17:20:55.777507    4508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:20:55.777825    4508 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-180000" does not appear in /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:20:55.777920    4508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19696-1109/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-180000" cluster setting kubeconfig missing "stopped-upgrade-180000" context setting]
	I0923 17:20:55.778131    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.778834    4508 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10287a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:20:55.779188    4508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 17:20:55.781855    4508 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-180000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
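Drift detection here is nothing more than diff -u between the config already on disk and the newly rendered one: a non-zero exit marks drift, the unified diff is logged (criSocket gained its unix:// scheme; the kubelet section moved from the systemd to the cgroupfs driver and picked up hairpinMode and runtimeRequestTimeout), and the new file later replaces the old. In shell terms:

    # a non-zero exit from diff signals drift; the new config then overwrites the old
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml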
	I0923 17:20:55.781861    4508 kubeadm.go:1160] stopping kube-system containers ...
	I0923 17:20:55.781909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:20:55.792510    4508 docker.go:483] Stopping containers: [d197e6aae6df d90f22288f74 f23fdf4a3c0e d3412f726c41 bef04daa8846 c5580dec55db c76c65ec3945 888ebeffd7fc]
	I0923 17:20:55.792591    4508 ssh_runner.go:195] Run: docker stop d197e6aae6df d90f22288f74 f23fdf4a3c0e d3412f726c41 bef04daa8846 c5580dec55db c76c65ec3945 888ebeffd7fc
	I0923 17:20:55.803290    4508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 17:20:55.808668    4508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:20:55.811607    4508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:20:55.811613    4508 kubeadm.go:157] found existing configuration files:
	
	I0923 17:20:55.811638    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0923 17:20:55.814062    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:20:55.814089    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:20:55.817055    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0923 17:20:55.820076    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:20:55.820101    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:20:55.822588    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0923 17:20:55.825211    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:20:55.825239    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:20:55.828338    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0923 17:20:55.831074    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:20:55.831115    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 17:20:55.833815    4508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:20:55.837320    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:55.859490    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.344915    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.481378    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.503403    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.526066    4508 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:20:56.526144    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:53.255713    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:57.028292    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:57.528242    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:57.536398    4508 api_server.go:72] duration metric: took 1.010333083s to wait for apiserver process to appear ...
	I0923 17:20:57.536412    4508 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:20:57.536422    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:20:58.258335    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
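From here on, both PIDs cycle through the same pattern: a GET against https://10.0.2.15:8443/healthz with a short client timeout, a "stopped" entry when the deadline is exceeded, then a fresh sweep of the container list to gather logs. The probe itself reduces to roughly this (an approximation, not minikube's actual client code):

    # the healthz probe, approximately: insecure TLS, bounded wait
    curl -k --max-time 5 https://10.0.2.15:8443/healthz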
	I0923 17:20:58.258454    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:20:58.271337    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:20:58.271421    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:20:58.282397    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:20:58.282473    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:20:58.294010    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:20:58.294083    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:20:58.305333    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:20:58.305418    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:20:58.316182    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:20:58.316264    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:20:58.327339    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:20:58.327417    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:20:58.338383    4371 logs.go:276] 0 containers: []
	W0923 17:20:58.338398    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:20:58.338470    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:20:58.349142    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:20:58.349160    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:20:58.349165    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:20:58.361517    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:20:58.361533    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:20:58.372661    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:20:58.372673    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:20:58.383805    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:20:58.383818    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:20:58.423395    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:20:58.423406    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:20:58.427592    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:20:58.427599    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:20:58.439612    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:20:58.439622    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:20:58.450822    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:20:58.450833    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:20:58.485425    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:20:58.485437    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:20:58.496807    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:20:58.496822    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:20:58.508886    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:20:58.508896    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:20:58.526398    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:20:58.526414    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:20:58.549942    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:20:58.549950    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:20:58.561883    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:20:58.561896    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:20:58.573402    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:20:58.573412    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:20:58.585109    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:20:58.585119    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:20:58.602738    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:20:58.602748    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:01.118714    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:02.538504    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:02.538530    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:06.120238    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:06.120524    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:06.142218    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:06.142360    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:06.157247    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:06.157333    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:06.169732    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:06.169819    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:06.180524    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:06.180607    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:06.191102    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:06.191189    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:06.201887    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:06.201974    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:06.212169    4371 logs.go:276] 0 containers: []
	W0923 17:21:06.212182    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:06.212258    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:06.222981    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:06.223001    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:06.223007    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:06.236329    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:06.236339    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:06.248182    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:06.248192    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:06.269170    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:06.269182    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:06.281299    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:06.281311    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:06.321711    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:06.321719    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:06.357003    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:06.357016    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:06.373389    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:06.373402    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:06.378253    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:06.378261    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:06.392420    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:06.392434    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:06.404032    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:06.404045    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:06.415712    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:06.415722    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:06.427228    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:06.427238    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:06.438767    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:06.438777    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:06.463026    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:06.463034    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:06.477028    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:06.477037    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:06.488328    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:06.488337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:07.538760    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:07.538824    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:09.001739    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:12.539264    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:12.539289    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:14.003374    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:14.003537    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:14.014480    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:14.014571    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:14.025556    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:14.025645    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:14.036449    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:14.036532    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:14.046821    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:14.046906    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:14.057280    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:14.057369    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:14.067951    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:14.068035    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:14.078303    4371 logs.go:276] 0 containers: []
	W0923 17:21:14.078314    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:14.078386    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:14.089523    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:14.089540    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:14.089545    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:14.102062    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:14.102079    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:14.115942    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:14.115954    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:14.136201    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:14.136215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:14.147865    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:14.147878    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:14.159129    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:14.159142    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:14.173232    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:14.173250    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:14.186313    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:14.186324    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:14.204010    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:14.204020    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:14.245876    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:14.245887    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:14.257603    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:14.257615    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:14.282669    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:14.282679    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:14.294270    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:14.294282    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:14.305430    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:14.305442    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:14.317143    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:14.317159    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:14.322137    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:14.322145    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:14.361885    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:14.361899    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
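	The alternating "Checking apiserver healthz" / "stopped" pairs above are minikube's readiness poll: each probe is a plain HTTPS GET against /healthz with a short per-request timeout, retried until an overall deadline, and a timed-out probe surfaces as the "Client.Timeout exceeded while awaiting headers" error seen throughout this log. A minimal Go sketch of that loop (the 5-second per-request timeout and the helper names are assumptions, not minikube's exact api_server.go code):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // pollHealthz GETs /healthz until it returns 200 or the overall
	    // deadline passes. Each failed attempt corresponds to one
	    // "stopped: ... Client.Timeout exceeded" line in the log.
	    func pollHealthz(url string, deadline time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // assumed per-request timeout
	            // The guest apiserver's cert is not trusted by this probe.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        stop := time.Now().Add(deadline)
	        for time.Now().Before(stop) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(time.Second)
	        }
	        return fmt.Errorf("apiserver never became healthy at %s", url)
	    }

	    func main() {
	        if err := pollHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }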
	I0923 17:21:16.884490    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:17.539754    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:17.539800    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:21.886624    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:21.886939    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:21.914399    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:21.914540    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:21.935671    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:21.935770    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:21.948626    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:21.948725    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:21.961930    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:21.962017    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:21.972592    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:21.972672    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:21.982756    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:21.982851    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:21.993770    4371 logs.go:276] 0 containers: []
	W0923 17:21:21.993782    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:21.993857    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:22.004480    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:22.004499    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:22.004506    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:22.008898    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:22.008904    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:22.026276    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:22.026286    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:22.037840    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:22.037850    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:22.050180    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:22.050191    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:22.061822    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:22.061835    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:22.073482    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:22.073493    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:22.097126    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:22.097133    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:22.138953    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:22.138966    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:22.150389    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:22.150404    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:22.191089    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:22.191098    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:22.207758    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:22.207772    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:22.540453    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:22.540480    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:22.219071    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:22.219086    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:22.231786    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:22.231800    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:22.246652    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:22.246663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:22.258795    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:22.258812    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:22.277210    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:22.277225    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
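	Each retry cycle above re-discovers the control-plane containers one component at a time with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" (the k8s_ prefix is the kubelet/dockershim container naming convention), then tails each match. A hedged Go equivalent of that discovery step (the exec wrapper is illustrative, not minikube's ssh_runner):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all containers, running or exited, whose name
	    // matches k8s_<component>, one ID per line -- the same docker
	    // invocation that produces the "2 containers: [...]" lines above.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := containerIDs(c)
	            fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	        }
	    }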
	I0923 17:21:24.788683    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:27.541538    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:27.541593    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:29.791025    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:29.791256    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:29.810248    4371 logs.go:276] 2 containers: [fd00d1544c98 3b316c561070]
	I0923 17:21:29.810365    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:29.824689    4371 logs.go:276] 2 containers: [49886fb2966e a84de2b73e49]
	I0923 17:21:29.824779    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:29.835684    4371 logs.go:276] 1 containers: [ad09aaa4e9bb]
	I0923 17:21:29.835770    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:29.850066    4371 logs.go:276] 2 containers: [6b777bf4c964 8b9a027a5b5d]
	I0923 17:21:29.850155    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:29.864397    4371 logs.go:276] 1 containers: [317ca800d163]
	I0923 17:21:29.864475    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:29.877178    4371 logs.go:276] 2 containers: [7b1f3fd302d6 ea8914f0f7c5]
	I0923 17:21:29.877251    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:29.890043    4371 logs.go:276] 0 containers: []
	W0923 17:21:29.890057    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:29.890132    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:29.905810    4371 logs.go:276] 2 containers: [05f10859c783 59e1929f4d8d]
	I0923 17:21:29.905827    4371 logs.go:123] Gathering logs for kube-proxy [317ca800d163] ...
	I0923 17:21:29.905833    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 317ca800d163"
	I0923 17:21:29.917531    4371 logs.go:123] Gathering logs for kube-controller-manager [7b1f3fd302d6] ...
	I0923 17:21:29.917542    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b1f3fd302d6"
	I0923 17:21:29.934541    4371 logs.go:123] Gathering logs for kube-apiserver [3b316c561070] ...
	I0923 17:21:29.934552    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b316c561070"
	I0923 17:21:29.945730    4371 logs.go:123] Gathering logs for etcd [49886fb2966e] ...
	I0923 17:21:29.945742    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49886fb2966e"
	I0923 17:21:29.960920    4371 logs.go:123] Gathering logs for kube-scheduler [8b9a027a5b5d] ...
	I0923 17:21:29.960930    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b9a027a5b5d"
	I0923 17:21:29.972292    4371 logs.go:123] Gathering logs for kube-controller-manager [ea8914f0f7c5] ...
	I0923 17:21:29.972304    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea8914f0f7c5"
	I0923 17:21:29.984331    4371 logs.go:123] Gathering logs for storage-provisioner [59e1929f4d8d] ...
	I0923 17:21:29.984345    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59e1929f4d8d"
	I0923 17:21:29.995565    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:29.995578    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:30.000045    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:30.000055    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:30.037466    4371 logs.go:123] Gathering logs for kube-scheduler [6b777bf4c964] ...
	I0923 17:21:30.037480    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b777bf4c964"
	I0923 17:21:30.049259    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:30.049271    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:30.090972    4371 logs.go:123] Gathering logs for storage-provisioner [05f10859c783] ...
	I0923 17:21:30.090981    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05f10859c783"
	I0923 17:21:30.102857    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:30.102868    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:30.125610    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:21:30.125620    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:21:30.138085    4371 logs.go:123] Gathering logs for kube-apiserver [fd00d1544c98] ...
	I0923 17:21:30.138101    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd00d1544c98"
	I0923 17:21:30.158826    4371 logs.go:123] Gathering logs for etcd [a84de2b73e49] ...
	I0923 17:21:30.158837    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a84de2b73e49"
	I0923 17:21:30.171299    4371 logs.go:123] Gathering logs for coredns [ad09aaa4e9bb] ...
	I0923 17:21:30.171313    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad09aaa4e9bb"
	I0923 17:21:32.542824    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:32.542880    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:32.685066    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:37.685884    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:37.685944    4371 kubeadm.go:597] duration metric: took 4m4.516184375s to restartPrimaryControlPlane
	W0923 17:21:37.686004    4371 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 17:21:37.686029    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 17:21:38.644560    4371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 17:21:38.649539    4371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:21:38.652256    4371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:21:38.655061    4371 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:21:38.655069    4371 kubeadm.go:157] found existing configuration files:
	
	I0923 17:21:38.655100    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0923 17:21:38.657805    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:21:38.657838    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:21:38.660340    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0923 17:21:38.663015    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:21:38.663047    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:21:38.666278    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0923 17:21:38.668834    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:21:38.668868    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:21:38.671492    4371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0923 17:21:38.674477    4371 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:21:38.674510    4371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
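	After the kubeadm reset above wipes /etc/kubernetes, minikube checks each kubeconfig for the expected control-plane endpoint and removes any file that doesn't mention it; here every grep exits with status 2 because the files are already gone, so each path is unconditionally rm -f'd before reinit. A sketch of that cleanup pass (paths and endpoint taken from the log; the helper name is assumed, and this would run as root inside the guest):

	    package main

	    import (
	        "fmt"
	        "os"
	        "os/exec"
	    )

	    // cleanStaleConfigs drops any kubeconfig that does not reference the
	    // expected endpoint. grep exiting non-zero (no match, or a missing
	    // file, as in the log) both lead to removal, matching kubeadm.go:163's
	    // "may not be in ... - will remove" behaviour.
	    func cleanStaleConfigs(endpoint string) {
	        for _, f := range []string{
	            "/etc/kubernetes/admin.conf",
	            "/etc/kubernetes/kubelet.conf",
	            "/etc/kubernetes/controller-manager.conf",
	            "/etc/kubernetes/scheduler.conf",
	        } {
	            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
	                fmt.Printf("%q not found in %s, removing\n", endpoint, f)
	                os.Remove(f)
	            }
	        }
	    }

	    func main() {
	        cleanStaleConfigs("https://control-plane.minikube.internal:50281")
	    }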
	I0923 17:21:38.677117    4371 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 17:21:38.699632    4371 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 17:21:38.699660    4371 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 17:21:38.747623    4371 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 17:21:38.747698    4371 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 17:21:38.747748    4371 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 17:21:38.798257    4371 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 17:21:38.801577    4371 out.go:235]   - Generating certificates and keys ...
	I0923 17:21:38.801610    4371 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 17:21:38.801646    4371 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 17:21:38.801690    4371 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 17:21:38.801724    4371 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 17:21:38.801759    4371 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 17:21:38.801793    4371 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 17:21:38.801825    4371 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 17:21:38.801857    4371 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 17:21:38.801892    4371 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 17:21:38.801934    4371 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 17:21:38.801965    4371 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 17:21:38.801996    4371 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 17:21:38.926930    4371 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 17:21:39.015183    4371 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 17:21:39.133020    4371 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 17:21:39.233088    4371 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 17:21:39.267725    4371 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 17:21:39.268044    4371 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 17:21:39.268100    4371 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 17:21:39.356403    4371 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 17:21:37.544364    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:37.544407    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:39.360444    4371 out.go:235]   - Booting up control plane ...
	I0923 17:21:39.360490    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 17:21:39.360545    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 17:21:39.360580    4371 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 17:21:39.360847    4371 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 17:21:39.361612    4371 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 17:21:43.864087    4371 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502184 seconds
	I0923 17:21:43.864190    4371 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 17:21:43.868536    4371 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 17:21:44.376408    4371 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 17:21:44.376564    4371 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-903000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 17:21:44.882275    4371 kubeadm.go:310] [bootstrap-token] Using token: rwu6gf.h8ide94f0miso0i5
	I0923 17:21:44.888071    4371 out.go:235]   - Configuring RBAC rules ...
	I0923 17:21:44.888208    4371 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 17:21:44.893003    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 17:21:44.896162    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 17:21:44.897059    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 17:21:44.897996    4371 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 17:21:44.898786    4371 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 17:21:44.902285    4371 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 17:21:45.060064    4371 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 17:21:45.295159    4371 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 17:21:45.295690    4371 kubeadm.go:310] 
	I0923 17:21:45.295720    4371 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 17:21:45.295764    4371 kubeadm.go:310] 
	I0923 17:21:45.295807    4371 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 17:21:45.295811    4371 kubeadm.go:310] 
	I0923 17:21:45.295823    4371 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 17:21:45.295872    4371 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 17:21:45.295953    4371 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 17:21:45.295956    4371 kubeadm.go:310] 
	I0923 17:21:45.295985    4371 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 17:21:45.295987    4371 kubeadm.go:310] 
	I0923 17:21:45.296021    4371 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 17:21:45.296025    4371 kubeadm.go:310] 
	I0923 17:21:45.296082    4371 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 17:21:45.296139    4371 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 17:21:45.296215    4371 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 17:21:45.296225    4371 kubeadm.go:310] 
	I0923 17:21:45.296284    4371 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 17:21:45.296325    4371 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 17:21:45.296329    4371 kubeadm.go:310] 
	I0923 17:21:45.296393    4371 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rwu6gf.h8ide94f0miso0i5 \
	I0923 17:21:45.296450    4371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 \
	I0923 17:21:45.296461    4371 kubeadm.go:310] 	--control-plane 
	I0923 17:21:45.296465    4371 kubeadm.go:310] 
	I0923 17:21:45.296505    4371 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 17:21:45.296508    4371 kubeadm.go:310] 
	I0923 17:21:45.296553    4371 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rwu6gf.h8ide94f0miso0i5 \
	I0923 17:21:45.296628    4371 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 
	I0923 17:21:45.296690    4371 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 17:21:45.296697    4371 cni.go:84] Creating CNI manager for ""
	I0923 17:21:45.296704    4371 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:21:45.299861    4371 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 17:21:45.306894    4371 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 17:21:45.309851    4371 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
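	The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is minikube's bridge CNI config, chosen because the qemu2 driver with the docker runtime on Kubernetes v1.24+ gets the bridge plugin by default. The log records only the file's size, not its contents; the sketch below writes a representative bridge + portmap conflist of the kind minikube generates (all field values here are assumptions, not the exact payload):

	    package main

	    import "os"

	    // A representative bridge CNI config; minikube's real 1-k8s.conflist
	    // may differ in names, versions, and IPAM ranges.
	    const conflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	        },
	        {"type": "portmap", "capabilities": {"portMappings": true}}
	      ]
	    }`

	    func main() {
	        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
	            panic(err)
	        }
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
	            panic(err)
	        }
	    }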
	I0923 17:21:45.314662    4371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 17:21:45.314743    4371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 17:21:45.314774    4371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-903000 minikube.k8s.io/updated_at=2024_09_23T17_21_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=running-upgrade-903000 minikube.k8s.io/primary=true
	I0923 17:21:45.358663    4371 ops.go:34] apiserver oom_adj: -16
	I0923 17:21:45.358715    4371 kubeadm.go:1113] duration metric: took 44.017875ms to wait for elevateKubeSystemPrivileges
	I0923 17:21:45.358727    4371 kubeadm.go:394] duration metric: took 4m12.216707125s to StartCluster
	I0923 17:21:45.358737    4371 settings.go:142] acquiring lock: {Name:mk533b8e20cbdc896b9e0666ee546603a1b156f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:21:45.358827    4371 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:21:45.359207    4371 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:21:45.359392    4371 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:21:45.359417    4371 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 17:21:45.359529    4371 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-903000"
	I0923 17:21:45.359539    4371 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-903000"
	W0923 17:21:45.359542    4371 addons.go:243] addon storage-provisioner should already be in state true
	I0923 17:21:45.359551    4371 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-903000"
	I0923 17:21:45.359554    4371 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 17:21:45.359559    4371 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-903000"
	I0923 17:21:45.359552    4371 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:21:45.360568    4371 kapi.go:59] client config for running-upgrade-903000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106966030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
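	The rest.Config dump above shows how minikube authenticates to the cluster it just rebuilt: the host URL plus the profile's client certificate/key and the cluster CA, with no bearer token. A hedged client-go equivalent (cert paths copied from the log; this is a sketch, not kapi.go itself):

	    package main

	    import (
	        "fmt"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func main() {
	        // Certificate-only auth, mirroring the rest.Config in the log.
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: "/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.crt",
	                KeyFile:  "/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/running-upgrade-903000/client.key",
	                CAFile:   "/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt",
	            },
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("client ready:", clientset != nil)
	    }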
	I0923 17:21:45.360688    4371 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-903000"
	W0923 17:21:45.360693    4371 addons.go:243] addon default-storageclass should already be in state true
	I0923 17:21:45.360699    4371 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 17:21:45.363761    4371 out.go:177] * Verifying Kubernetes components...
	I0923 17:21:45.364070    4371 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 17:21:45.367969    4371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 17:21:45.367976    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:21:45.373026    4371 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:21:42.546335    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:42.546373    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:45.373228    4371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:21:45.376939    4371 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:21:45.376955    4371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 17:21:45.376970    4371 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 17:21:45.456269    4371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:21:45.460949    4371 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:21:45.461002    4371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:21:45.464800    4371 api_server.go:72] duration metric: took 105.39825ms to wait for apiserver process to appear ...
	I0923 17:21:45.464807    4371 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:21:45.464814    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:45.494930    4371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 17:21:45.498236    4371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
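	Addon enablement, as the two Runs above show, is just scp'ing a manifest into /etc/kubernetes/addons and invoking the version-pinned kubectl inside the VM against the root kubeconfig. A sketch of that apply step (binary and kubeconfig paths from the log; meant to run inside the guest):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // applyAddon replays the log's invocation: sudo passes KUBECONFIG
	    // through the environment, and the pinned kubectl applies the
	    // addon manifest.
	    func applyAddon(manifest string) error {
	        cmd := exec.Command("sudo",
	            "KUBECONFIG=/var/lib/minikube/kubeconfig",
	            "/var/lib/minikube/binaries/v1.24.1/kubectl",
	            "apply", "-f", manifest)
	        out, err := cmd.CombinedOutput()
	        fmt.Print(string(out))
	        return err
	    }

	    func main() {
	        if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
	            fmt.Println("apply failed:", err)
	        }
	    }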
	I0923 17:21:45.821292    4371 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 17:21:45.821303    4371 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 17:21:47.548602    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:47.548656    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:50.465202    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:50.465278    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:52.551023    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:52.551065    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:55.465726    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:55.465772    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:57.553294    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:57.553482    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:57.564291    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:21:57.564379    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:57.574428    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:21:57.574517    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:57.585379    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:21:57.585483    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:57.595679    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:21:57.595768    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:57.606052    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:21:57.606131    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:57.616758    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:21:57.616841    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:57.631288    4508 logs.go:276] 0 containers: []
	W0923 17:21:57.631301    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:57.631375    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:57.641703    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:21:57.641724    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:57.641729    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:57.716636    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:21:57.716648    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:21:57.731237    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:21:57.731255    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:21:57.773418    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:57.773429    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:57.800392    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:57.800407    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:57.804700    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:21:57.804710    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:21:57.819575    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:21:57.819584    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:21:57.830997    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:21:57.831012    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:21:57.842529    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:21:57.842542    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:21:57.859968    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:57.859983    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:57.899651    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:21:57.899663    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:21:57.911331    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:21:57.911341    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:21:57.925089    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:21:57.925102    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:21:57.940538    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:21:57.940554    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:21:57.953537    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:21:57.953551    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:21:57.965651    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:21:57.965661    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:21:57.984413    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:21:57.984437    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
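	The "container status" step above uses a shell fallback: prefer crictl if it is on PATH, otherwise fall back to docker. The backquoted "which crictl || echo crictl" keeps the command word non-empty when crictl is missing, so the outer "|| sudo docker ps -a" branch fires on failure. The same fallback in Go (illustrative only):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatus prefers crictl and falls back to docker -- the Go
	    // analogue of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    func containerStatus() (string, error) {
	        if _, err := exec.LookPath("crictl"); err == nil {
	            if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
	                return string(out), nil
	            }
	        }
	        out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	        return string(out), err
	    }

	    func main() {
	        out, err := containerStatus()
	        fmt.Println(out, err)
	    }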
	I0923 17:22:00.498836    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:00.466124    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:00.466151    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:05.501066    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:05.501213    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:05.513188    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:05.513283    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:05.524073    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:05.524171    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:05.534695    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:05.534779    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:05.546710    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:05.546794    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:05.557318    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:05.557394    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:05.568160    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:05.568248    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:05.580769    4508 logs.go:276] 0 containers: []
	W0923 17:22:05.580781    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:05.580856    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:05.591303    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:05.591321    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:05.591326    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:05.628374    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:05.628382    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:05.642205    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:05.642215    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:05.653688    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:05.653701    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:05.664443    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:05.664457    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:05.678252    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:05.678263    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:05.689643    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:05.689653    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:05.706616    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:05.706630    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:05.724416    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:05.724425    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:05.736610    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:05.736620    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:05.748144    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:05.748155    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:05.783976    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:05.783992    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:05.799623    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:05.799633    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:05.811831    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:05.811842    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:05.824245    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:05.824257    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:05.828594    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:05.828601    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:05.866592    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:05.866602    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:05.466561    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:05.466597    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:08.394566    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:10.466757    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:10.466782    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:15.466974    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:15.467012    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 17:22:15.823580    4371 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 17:22:15.828232    4371 out.go:177] * Enabled addons: storage-provisioner
	I0923 17:22:13.396074    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:13.396364    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:13.417886    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:13.418005    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:13.432253    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:13.432353    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:13.445274    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:13.445355    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:13.456435    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:13.456542    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:13.466916    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:13.466993    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:13.477866    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:13.477951    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:13.487614    4508 logs.go:276] 0 containers: []
	W0923 17:22:13.487636    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:13.487709    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:13.498275    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:13.498296    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:13.498302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:13.516040    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:13.516055    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:13.531007    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:13.531021    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:13.542411    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:13.542423    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:13.553594    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:13.553606    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:13.566008    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:13.566023    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:13.604863    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:13.604877    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:13.608954    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:13.608961    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:13.622560    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:13.622575    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:13.634519    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:13.634533    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:13.648923    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:13.648932    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:13.661641    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:13.661656    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:13.700095    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:13.700104    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:13.738290    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:13.738306    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:13.750547    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:13.750558    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:13.771399    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:13.771413    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:13.783351    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:13.783361    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:16.309030    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:15.836191    4371 addons.go:510] duration metric: took 30.476988542s for enable addons: enabled=[storage-provisioner]
	I0923 17:22:21.310538    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:21.310713    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:21.322030    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:21.322114    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:21.333146    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:21.333221    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:21.344707    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:21.344813    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:21.355184    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:21.355271    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:21.365973    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:21.366063    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:21.376259    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:21.376333    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:21.386699    4508 logs.go:276] 0 containers: []
	W0923 17:22:21.386712    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:21.386776    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:21.397562    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:21.397580    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:21.397586    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:21.416997    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:21.417007    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:21.428428    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:21.428440    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:21.453729    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:21.453739    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:21.491792    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:21.491805    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:21.506223    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:21.506238    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:21.518549    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:21.518562    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:21.530809    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:21.530819    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:21.565786    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:21.565801    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:21.579576    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:21.579587    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:21.590998    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:21.591010    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:21.604809    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:21.604819    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:21.616633    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:21.616650    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:21.631423    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:21.631436    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:21.643586    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:21.643600    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:21.681594    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:21.681608    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:21.685868    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:21.685877    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:20.467248    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:20.467281    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:24.201893    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:25.468070    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:25.468113    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:29.204303    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:29.204577    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:29.222955    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:29.223066    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:29.236540    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:29.236629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:29.254887    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:29.254968    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:29.265503    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:29.265592    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:29.275826    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:29.275904    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:29.286987    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:29.287072    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:29.296926    4508 logs.go:276] 0 containers: []
	W0923 17:22:29.296938    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:29.297009    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:29.307551    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:29.307571    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:29.307576    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:29.321895    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:29.321906    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:29.333253    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:29.333302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:29.344712    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:29.344723    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:29.359142    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:29.359153    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:29.397542    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:29.397553    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:29.422282    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:29.422292    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:29.426953    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:29.426959    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:29.442400    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:29.442414    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:29.460847    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:29.460858    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:29.500830    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:29.500840    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:29.515165    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:29.515176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:29.529744    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:29.529755    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:29.541588    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:29.541602    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:29.553292    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:29.553302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:29.570019    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:29.570029    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:29.582913    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:29.582923    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:30.468668    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:30.468708    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:32.119739    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:35.469441    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:35.469464    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:37.122034    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:37.122288    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:37.137003    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:37.137104    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:37.149028    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:37.149118    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:37.159828    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:37.159916    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:37.170670    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:37.170758    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:37.180666    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:37.180751    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:37.190948    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:37.191034    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:37.201251    4508 logs.go:276] 0 containers: []
	W0923 17:22:37.201262    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:37.201339    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:37.211621    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:37.211638    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:37.211643    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:37.250542    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:37.250553    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:37.288675    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:37.288690    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:37.300640    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:37.300650    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:37.312444    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:37.312455    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:37.316534    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:37.316543    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:37.330984    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:37.330994    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:37.342022    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:37.342036    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:37.366814    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:37.366825    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:37.380860    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:37.380871    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:37.418554    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:37.418565    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:37.432817    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:37.432827    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:37.449569    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:37.449585    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:37.463576    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:37.463586    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:37.481164    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:37.481174    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:37.492114    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:37.492130    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:37.504255    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:37.504265    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
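
[Editor's note] The "container status" step in the cycles above uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a prefers crictl when it is installed and falls back to docker when it is missing or fails. A small Go sketch of issuing that same one-liner through bash, as the ssh_runner lines do; this is an illustration of the command's semantics, not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl if present; otherwise (or on any crictl failure)
        // fall back to docker, exactly as the shell one-liner above does.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("container status failed:", err)
        }
        fmt.Print(string(out))
    }
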
	I0923 17:22:40.017581    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:40.469734    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:40.469763    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:45.020226    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:45.020537    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:45.046296    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:45.046450    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:45.063799    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:45.063909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:45.079588    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:45.079677    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:45.091253    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:45.091342    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:45.101795    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:45.101872    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:45.114186    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:45.114267    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:45.124902    4508 logs.go:276] 0 containers: []
	W0923 17:22:45.124917    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:45.124986    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:45.135198    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:45.135217    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:45.135222    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:45.161204    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:45.161216    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:45.165361    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:45.165371    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:45.176323    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:45.176335    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:45.193596    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:45.193606    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:45.208403    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:45.208417    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:45.246430    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:45.246446    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:45.260476    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:45.260486    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:45.272211    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:45.272228    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:45.284271    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:45.284282    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:45.298616    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:45.298627    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:45.312512    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:45.312527    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:45.327163    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:45.327174    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:45.339951    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:45.339962    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:45.377360    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:45.377371    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:45.414164    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:45.414175    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:45.429042    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:45.429053    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:45.470776    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:45.470877    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:45.481918    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:22:45.482003    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:45.492126    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:22:45.492215    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:45.502683    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:22:45.502776    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:45.513317    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:22:45.513404    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:45.530156    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:22:45.530244    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:45.541299    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:22:45.541387    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:45.551832    4371 logs.go:276] 0 containers: []
	W0923 17:22:45.551845    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:45.551915    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:45.562597    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:22:45.562612    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:22:45.562618    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:22:45.574405    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:22:45.574420    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:22:45.586614    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:22:45.586624    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:22:45.598857    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:45.598867    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:45.624045    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:22:45.624055    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:45.635764    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:22:45.635775    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:22:45.649925    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:45.649935    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:45.654259    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:45.654266    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:45.695043    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:22:45.695053    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:22:45.713724    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:22:45.713735    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:22:45.728614    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:22:45.728625    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:22:45.744859    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:22:45.744870    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:22:45.762889    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:45.762901    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:47.948264    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:48.303697    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:52.951002    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:52.951321    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:52.977153    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:52.977281    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:52.993604    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:52.993702    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:53.006108    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:53.006200    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:53.017606    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:53.017695    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:53.027861    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:53.027942    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:53.039846    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:53.039928    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:53.056727    4508 logs.go:276] 0 containers: []
	W0923 17:22:53.056739    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:53.056814    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:53.066656    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:53.066674    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:53.066680    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:53.104405    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:53.104417    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:53.116966    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:53.116976    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:53.128452    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:53.128463    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:53.151823    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:53.151835    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:53.188902    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:53.188908    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:53.192737    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:53.192747    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:53.226700    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:53.226712    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:53.242848    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:53.242859    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:53.259825    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:53.259836    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:53.277207    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:53.277218    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:53.290963    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:53.290973    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:53.303499    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:53.303509    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:53.318674    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:53.318682    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:53.331334    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:53.331345    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:53.349382    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:53.349394    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:53.362456    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:53.362470    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:55.877415    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:53.306311    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:53.306413    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:53.317619    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:22:53.317705    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:53.329122    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:22:53.329212    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:53.341126    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:22:53.341269    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:53.354643    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:22:53.354733    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:53.366176    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:22:53.366262    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:53.378544    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:22:53.378631    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:53.388616    4371 logs.go:276] 0 containers: []
	W0923 17:22:53.388626    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:53.388696    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:53.403497    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:22:53.403510    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:53.403518    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:53.408863    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:22:53.408869    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:22:53.421052    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:22:53.421062    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:22:53.439062    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:22:53.439077    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:53.450854    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:53.450869    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:53.490589    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:22:53.490600    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:22:53.504598    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:22:53.504613    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:22:53.520628    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:22:53.520641    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:22:53.532447    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:22:53.532463    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:22:53.546996    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:22:53.547010    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:22:53.558918    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:22:53.558931    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:22:53.570478    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:53.570491    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:53.595358    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:53.595366    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:56.129734    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:00.879100    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:00.879292    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:00.893241    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:00.893338    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:00.907557    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:00.907649    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:00.918472    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:00.918548    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:00.930087    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:00.930171    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:00.940453    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:00.940535    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:00.950881    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:00.950958    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:00.965326    4508 logs.go:276] 0 containers: []
	W0923 17:23:00.965341    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:00.965414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:00.976384    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:00.976404    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:00.976409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:00.994998    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:00.995009    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:01.006659    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:01.006672    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:01.021400    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:01.021411    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:01.033295    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:01.033307    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:01.047570    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:01.047585    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:01.072947    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:01.072956    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:01.084619    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:01.084629    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:01.124265    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:01.124276    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:01.139113    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:01.139130    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:01.185970    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:01.185982    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:01.200732    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:01.200749    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:01.205042    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:01.205049    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:01.217344    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:01.217354    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:01.242453    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:01.242468    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:01.281414    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:01.281430    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:01.294185    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:01.294199    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:01.132039    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:01.132141    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:01.145667    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:01.145754    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:01.161114    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:01.161201    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:01.172038    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:01.172121    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:01.183046    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:01.183135    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:01.193840    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:01.193925    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:01.204954    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:01.205036    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:01.216165    4371 logs.go:276] 0 containers: []
	W0923 17:23:01.216182    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:01.216261    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:01.227938    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:01.227953    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:01.227960    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:01.253742    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:01.253762    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:01.291083    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:01.291100    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:01.303844    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:01.303856    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:01.318324    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:01.318337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:01.332356    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:01.332372    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:01.343600    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:01.343612    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:01.355169    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:01.355184    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:01.370619    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:01.370634    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:01.390137    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:01.390149    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:01.429368    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:01.429375    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:01.434006    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:01.434015    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:01.446337    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:01.446348    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:03.808933    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:03.959524    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:08.811117    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:08.811391    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:08.839024    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:08.839141    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:08.853795    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:08.853894    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:08.866255    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:08.866349    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:08.877312    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:08.877399    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:08.887822    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:08.887909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:08.898328    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:08.898414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:08.908631    4508 logs.go:276] 0 containers: []
	W0923 17:23:08.908645    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:08.908720    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:08.919128    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:08.919151    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:08.919155    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:08.956914    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:08.956923    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:08.971161    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:08.971176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:08.983381    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:08.983395    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:09.008618    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:09.008632    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:09.046783    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:09.046801    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:09.051763    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:09.051776    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:09.066160    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:09.066171    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:09.081176    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:09.081193    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:09.094298    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:09.094312    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:09.108208    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:09.108217    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:09.121273    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:09.121286    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:09.164231    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:09.164246    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:09.177146    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:09.177160    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:09.201091    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:09.201110    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:09.214106    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:09.214119    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:09.227129    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:09.227142    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
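
[Editor's note] Taken together, the cycles above (for both interleaved processes, PIDs 4508 and 4371) follow one loop: poll /healthz; on a timeout, re-enumerate the component containers and re-gather their logs; repeat until the apiserver answers or the test's own deadline expires. A compact sketch of that control flow; the injected check/gather functions and the pause between rounds are placeholders for the behavior shown in the log, not minikube's API.

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer captures the retry structure visible above: poll the
    // healthz URL, and after each timeout run another round of log gathering.
    // check and gather are injected so this file stands alone; in this section
    // they correspond to the healthz and log-gathering sketches shown earlier.
    func waitForAPIServer(url string, deadline time.Time,
        check func(string) error, gather func()) error {
        for time.Now().Before(deadline) {
            if err := check(url); err == nil {
                return nil // apiserver answered /healthz
            }
            gather()                    // docker ps -a ... / docker logs --tail 400 ...
            time.Sleep(3 * time.Second) // assumed pause between rounds
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        err := waitForAPIServer("https://10.0.2.15:8443/healthz", deadline,
            func(string) error { return fmt.Errorf("unreachable") }, // stub check
            func() {})                                               // stub gather
        fmt.Println(err)
    }
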
	I0923 17:23:11.747964    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:08.961688    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:08.961784    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:08.973160    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:08.973250    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:08.985023    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:08.985105    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:08.996385    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:08.996470    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:09.012625    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:09.012711    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:09.023921    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:09.024006    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:09.035578    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:09.035679    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:09.047057    4371 logs.go:276] 0 containers: []
	W0923 17:23:09.047070    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:09.047144    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:09.062667    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:09.062685    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:09.062691    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:09.081854    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:09.081862    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:09.094651    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:09.094663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:09.106784    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:09.106797    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:09.125528    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:09.125539    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:09.149905    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:09.149916    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:09.189034    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:09.189051    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:09.194016    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:09.194032    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:09.232123    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:09.232139    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:09.247584    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:09.247594    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:09.262835    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:09.262850    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:09.274718    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:09.274732    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:09.285592    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:09.285610    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
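	(The block above is one complete diagnostic cycle: for each control-plane component the runner lists matching containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails each container's logs plus kubelet/dmesg/Docker journals. A minimal Go sketch of that pattern follows; it is illustrative only, not minikube's actual logs.go, and the helper name listContainers is invented for the sketch.)

```go
// Sketch of the enumerate-then-tail cycle seen in the log above.
// Assumes docker is on PATH; component names mirror the log's filters.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		if len(ids) == 0 {
			// Matches the log's warning when no kindnet container exists.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as the gatherer above does.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```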
	I0923 17:23:11.799566    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:16.750270    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:16.750493    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:16.770955    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:16.771073    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:16.785324    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:16.785419    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:16.797267    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:16.797343    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:16.810172    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:16.810261    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:16.821677    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:16.821764    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:16.834518    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:16.834598    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:16.846253    4508 logs.go:276] 0 containers: []
	W0923 17:23:16.846266    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:16.846341    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:16.857549    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:16.857568    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:16.857573    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:16.875132    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:16.875145    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:16.896356    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:16.896372    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:16.911727    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:16.911742    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:16.801724    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:16.801799    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:16.813186    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:16.813272    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:16.824858    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:16.824977    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:16.835949    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:16.836002    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:16.847686    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:16.847746    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:16.859024    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:16.859101    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:16.869976    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:16.870065    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:16.882113    4371 logs.go:276] 0 containers: []
	W0923 17:23:16.882125    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:16.882213    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:16.898123    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:16.898138    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:16.898143    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:16.935156    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:16.935170    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:16.950101    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:16.950112    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:16.962695    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:16.962707    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:16.989269    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:16.989287    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:17.002247    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:17.002266    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:17.015013    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:17.015031    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:17.033639    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:17.033653    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:17.075472    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:17.075492    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:17.080983    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:17.080994    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:17.096902    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:17.096912    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:17.109416    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:17.109429    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:17.121711    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:17.121725    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:16.953751    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:16.953768    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:16.989827    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:16.989838    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:17.002968    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:17.002976    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:17.015609    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:17.015618    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:17.035491    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:17.035501    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:17.050663    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:17.050678    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:17.065889    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:17.065900    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:17.078934    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:17.078945    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:17.098598    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:17.098609    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:17.114554    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:17.114566    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:17.119686    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:17.119698    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:17.160235    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:17.160250    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:17.185090    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:17.185102    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:19.696622    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:19.644417    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:24.697692    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:24.697807    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:24.710601    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:24.710693    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:24.721800    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:24.721892    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:24.733451    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:24.733538    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:24.745650    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:24.745737    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:24.756961    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:24.757052    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:24.768461    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:24.768551    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:24.779496    4508 logs.go:276] 0 containers: []
	W0923 17:23:24.779508    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:24.779584    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:24.790563    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:24.790581    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:24.790589    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:24.806237    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:24.806251    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:24.818602    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:24.818615    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:24.844074    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:24.844083    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:24.885456    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:24.885464    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:24.889928    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:24.889943    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:24.907267    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:24.907279    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:24.919966    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:24.919981    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:24.935845    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:24.935857    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:24.977322    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:24.977333    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:24.989015    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:24.989028    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:25.001894    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:25.001905    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:25.039429    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:25.039440    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:25.051512    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:25.051522    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:25.062812    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:25.062824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:25.076691    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:25.076701    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:25.094322    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:25.094333    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:24.646692    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:24.647015    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:24.675276    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:24.675427    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:24.697744    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:24.697806    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:24.711717    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:24.711774    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:24.723255    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:24.723329    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:24.734852    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:24.734915    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:24.746324    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:24.746377    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:24.757318    4371 logs.go:276] 0 containers: []
	W0923 17:23:24.757326    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:24.757363    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:24.768810    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:24.768826    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:24.768831    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:24.785063    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:24.785081    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:24.800688    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:24.800706    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:24.813453    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:24.813465    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:24.829522    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:24.829533    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:24.842687    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:24.842699    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:24.885027    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:24.885041    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:24.922271    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:24.922283    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:24.939234    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:24.939248    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:24.957119    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:24.957129    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:24.969453    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:24.969465    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:24.994005    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:24.994021    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:24.998633    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:24.998645    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:27.608718    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:27.513447    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:32.610875    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:32.610970    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:32.622557    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:32.622646    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:32.634396    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:32.634479    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:32.646121    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:32.646205    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:32.657351    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:32.657438    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:32.668504    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:32.668586    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:32.683381    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:32.683468    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:32.694864    4508 logs.go:276] 0 containers: []
	W0923 17:23:32.694880    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:32.694954    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:32.706052    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:32.706072    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:32.706078    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:32.753212    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:32.753224    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:32.772290    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:32.772305    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:32.786755    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:32.786767    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:32.805181    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:32.805190    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:32.818430    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:32.818443    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:32.858304    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:32.858318    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:32.873296    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:32.873308    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:32.885458    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:32.885469    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:32.896527    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:32.896539    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:32.932158    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:32.932172    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:32.948636    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:32.948650    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:32.963794    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:32.963806    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:32.987554    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:32.987562    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:32.991978    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:32.991984    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:33.006175    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:33.006187    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:33.018524    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:33.018537    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:35.532146    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:32.515796    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:32.516118    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:32.539412    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:32.539546    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:32.554544    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:32.554638    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:32.567366    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:32.567460    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:32.581907    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:32.582012    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:32.592377    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:32.592464    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:32.603574    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:32.603658    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:32.614006    4371 logs.go:276] 0 containers: []
	W0923 17:23:32.614017    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:32.614085    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:32.625014    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:32.625029    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:32.625035    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:32.629819    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:32.629831    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:32.642260    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:32.642275    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:32.660263    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:32.660277    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:32.672695    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:32.672708    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:32.685738    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:32.685749    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:32.699887    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:32.699898    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:32.724735    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:32.724754    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:32.764003    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:32.764025    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:32.801788    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:32.801800    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:32.831350    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:32.831367    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:32.846419    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:32.846430    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:32.858605    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:32.858613    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:35.376294    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:40.534353    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:40.534455    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:40.546321    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:40.546414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:40.559189    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:40.559281    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:40.570952    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:40.571040    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:40.586426    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:40.586519    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:40.598474    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:40.598562    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:40.610977    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:40.611063    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:40.621831    4508 logs.go:276] 0 containers: []
	W0923 17:23:40.621844    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:40.621918    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:40.633564    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:40.633586    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:40.633591    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:40.647148    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:40.647160    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:40.662605    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:40.662618    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:40.678071    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:40.678086    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:40.698489    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:40.698498    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:40.711056    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:40.711069    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:40.749870    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:40.749881    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:40.754057    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:40.754064    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:40.792056    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:40.792072    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:40.806188    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:40.806202    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:40.817043    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:40.817055    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:40.852290    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:40.852305    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:40.869877    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:40.869893    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:40.881225    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:40.881237    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:40.905453    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:40.905460    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:40.919464    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:40.919479    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:40.931249    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:40.931265    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:40.378687    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:40.378967    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:40.399332    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:40.399455    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:40.417680    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:40.417782    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:40.429364    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:40.429456    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:40.440187    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:40.440283    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:40.450816    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:40.450909    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:40.461448    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:40.461526    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:40.474343    4371 logs.go:276] 0 containers: []
	W0923 17:23:40.474356    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:40.474436    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:40.491008    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:40.491024    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:40.491030    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:40.530319    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:40.530332    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:40.534882    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:40.534888    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:40.572649    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:40.572659    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:40.590857    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:40.590871    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:40.603649    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:40.603663    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:40.616510    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:40.616522    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:40.628834    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:40.628847    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:40.641556    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:40.641569    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:40.666268    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:40.666280    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:40.681632    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:40.681650    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:40.697586    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:40.697597    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:40.715892    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:40.715902    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:43.444858    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:43.231748    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:48.447073    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:48.447174    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:48.458043    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:48.458131    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:48.469667    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:48.469757    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:48.485586    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:48.485676    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:48.497198    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:48.497287    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:48.508811    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:48.508887    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:48.520598    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:48.520691    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:48.531649    4508 logs.go:276] 0 containers: []
	W0923 17:23:48.531662    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:48.531737    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:48.543193    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:48.543212    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:48.543217    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:48.555673    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:48.555684    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:48.568095    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:48.568107    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:48.572728    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:48.572736    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:48.584792    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:48.584808    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:48.597223    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:48.597235    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:48.609411    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:48.609424    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:48.648433    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:48.648441    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:48.662940    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:48.662955    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:48.682062    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:48.682077    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:48.716443    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:48.716458    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:48.755237    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:48.755248    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:48.773784    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:48.773797    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:48.785177    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:48.785191    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:48.799922    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:48.799932    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:48.816778    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:48.816787    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:48.836340    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:48.836353    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:51.363262    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:48.233920    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:48.234056    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:48.245458    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:48.245546    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:48.259917    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:48.260006    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:48.270751    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:48.270833    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:48.281592    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:48.281676    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:48.291846    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:48.291939    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:48.301909    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:48.301982    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:48.312226    4371 logs.go:276] 0 containers: []
	W0923 17:23:48.312238    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:48.312301    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:48.322807    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:48.322827    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:48.322834    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:48.358217    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:48.358227    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:48.375184    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:48.375196    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:48.400712    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:48.400722    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:48.418087    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:48.418097    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:48.457606    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:48.457627    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:48.462838    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:48.462850    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:48.479507    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:48.479518    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:48.495553    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:48.495570    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:48.508730    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:48.508743    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:48.524456    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:48.524467    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:48.538464    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:48.538475    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:48.551629    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:48.551641    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:51.066642    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:56.365542    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:56.365638    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:56.376878    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:56.376963    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:56.387705    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:56.387794    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:56.398010    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:56.398099    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:56.408362    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:56.408447    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:56.418915    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:56.419004    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:56.429274    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:56.429351    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:56.439608    4508 logs.go:276] 0 containers: []
	W0923 17:23:56.439620    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:56.439689    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:56.450565    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:56.450586    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:56.450591    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:56.464400    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:56.464409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:56.476172    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:56.476185    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:56.515264    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:56.515273    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:56.529615    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:56.529625    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:56.551186    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:56.551201    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:56.573710    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:56.573718    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:56.577480    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:56.577485    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:56.612571    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:56.612587    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:56.630976    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:56.630989    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:56.642855    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:56.642866    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:56.659976    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:56.659992    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:56.673621    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:56.673638    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:56.687809    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:56.687824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:56.701582    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:56.701592    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:56.739618    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:56.739628    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:56.751102    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:56.751117    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:56.068951    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:56.069155    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:56.085979    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:23:56.086095    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:56.099430    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:23:56.099512    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:56.111196    4371 logs.go:276] 2 containers: [13581f2593f0 acf535e26be1]
	I0923 17:23:56.111287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:56.121711    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:23:56.121794    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:56.132459    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:23:56.132547    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:56.143163    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:23:56.143251    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:56.153894    4371 logs.go:276] 0 containers: []
	W0923 17:23:56.153908    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:56.153985    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:56.164638    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:23:56.164657    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:56.164662    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:56.203621    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:23:56.203629    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:23:56.217796    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:23:56.217811    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:23:56.229367    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:23:56.229381    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:23:56.243713    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:23:56.243727    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:23:56.255266    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:23:56.255282    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:56.266965    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:56.266976    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:56.271690    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:56.271698    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:56.305789    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:23:56.305801    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:23:56.319641    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:23:56.319656    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:23:56.331164    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:23:56.331174    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:23:56.342544    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:23:56.342553    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:23:56.365790    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:56.365797    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
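
The journald-backed sources above are tailed a little differently from the container logs: each systemd unit gets its own -u flag, so the "Docker" bucket drains the docker and cri-docker units in a single call, and -n 400 applies the same 400-line cap used everywhere else. An illustrative helper that composes those invocations; journalctlCmd is a hypothetical name for this sketch, not a minikube function.

	// Sketch only: builds the journalctl command strings seen above.
	package main

	import (
		"fmt"
		"strings"
	)

	// journalctlCmd returns a journalctl invocation tailing the last
	// `lines` entries of one or more systemd units.
	func journalctlCmd(units []string, lines int) string {
		args := []string{"sudo", "journalctl"}
		for _, u := range units {
			args = append(args, "-u", u)
		}
		args = append(args, "-n", fmt.Sprint(lines))
		return strings.Join(args, " ")
	}

	func main() {
		fmt.Println(journalctlCmd([]string{"kubelet"}, 400))
		fmt.Println(journalctlCmd([]string{"docker", "cri-docker"}, 400))
	}
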
	I0923 17:23:59.265557    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:58.894112    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:04.267759    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:04.267867    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:04.278929    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:04.279022    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:04.293612    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:04.293696    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:04.303872    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:04.303952    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:04.314319    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:04.314413    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:04.324812    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:04.324896    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:04.335592    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:04.335678    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:04.345823    4508 logs.go:276] 0 containers: []
	W0923 17:24:04.345835    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:04.345917    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:04.357708    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:04.357729    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:04.357735    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:04.369339    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:04.369352    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:04.373706    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:04.373713    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:04.410133    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:04.410148    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:04.421978    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:04.421990    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:04.445850    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:04.445858    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:04.461254    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:04.461264    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:04.475475    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:04.475489    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:04.493845    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:04.493859    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:04.505830    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:04.505845    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:04.519732    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:04.519745    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:04.557667    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:04.557681    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:04.571462    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:04.571472    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:04.585998    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:04.586013    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:04.625447    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:04.625455    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:04.636995    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:04.637005    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:04.648261    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:04.648272    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
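
The cadence of the surrounding lines, a "Checking apiserver healthz" entry followed roughly five seconds later by "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)", is the signature of Go's net/http client giving up while waiting for response headers: that parenthetical is emitted verbatim when an http.Client's Timeout fires. A minimal sketch of such a poll loop, assuming a 5-second per-request timeout (suggested by the gaps above, not confirmed from minikube's source) and an apiserver certificate the client does not trust:

	// Sketch only: reproduces the check/stopped rhythm in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			// A hung apiserver makes Get fail with:
			// context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumed: the apiserver's self-signed cert is untrusted here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		for attempt := 0; attempt < 10; attempt++ {
			fmt.Println("Checking apiserver healthz at", url, "...")
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("stopped: %s: %v\n", url, err)
				time.Sleep(3 * time.Second) // the real code gathers logs here
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
	}
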
	I0923 17:24:03.896360    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:03.896720    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:03.926445    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:03.926588    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:03.944181    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:03.944282    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:03.957799    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:03.957894    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:03.972915    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:03.973000    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:03.983645    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:03.983733    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:03.994333    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:03.994407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:04.004786    4371 logs.go:276] 0 containers: []
	W0923 17:24:04.004801    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:04.004865    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:04.015110    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:04.015126    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:04.015130    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:04.027133    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:04.027147    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:04.039064    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:04.039074    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:04.043476    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:04.043485    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:04.057821    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:04.057831    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:04.096902    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:04.096912    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:04.111554    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:04.111565    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:04.123737    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:04.123750    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:04.135613    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:04.135626    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:04.175192    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:04.175201    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:04.186889    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:04.186901    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:04.201177    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:04.201191    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:04.213012    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:04.213024    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:04.230711    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:04.230721    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:04.243304    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:04.243319    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
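
Each collection pass opens with a discovery sweep: one docker ps query per control-plane component, filtered on the kubelet's k8s_<component> name prefix and formatted down to bare IDs, which produces the "N containers: [...]" lines. Two IDs for one component mean an exited instance and its restarted successor are both present; zero, as with "kindnet" here, is only a warning. An illustrative local version follows; containerIDs is a hypothetical helper, and it runs docker directly rather than through minikube's SSH runner.

	// Sketch only: mirrors the per-component discovery commands above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose names
	// start with the kubelet-assigned k8s_<component> prefix.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
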
	I0923 17:24:06.770287    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:07.161326    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:11.772961    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:11.773299    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:11.804952    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:11.805114    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:11.823353    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:11.823449    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:11.839510    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:11.839608    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:11.851683    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:11.851769    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:11.862245    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:11.862322    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:11.873501    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:11.873588    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:11.883942    4371 logs.go:276] 0 containers: []
	W0923 17:24:11.883958    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:11.884033    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:11.895196    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:11.895214    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:11.895220    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:11.900343    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:11.900352    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:11.925367    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:11.925379    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:11.949183    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:11.949192    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:11.967143    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:11.967155    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:11.979057    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:11.979082    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:11.990240    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:11.990252    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:12.002345    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:12.002356    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:12.017012    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:12.017027    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:12.029193    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:12.029205    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:12.043717    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:12.043730    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:12.076211    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:12.076225    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:12.114771    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:12.114784    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:12.150788    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:12.150805    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:12.162701    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:12.162709    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:12.161935    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:12.162060    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:12.174077    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:12.174164    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:12.185921    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:12.186005    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:12.197043    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:12.197120    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:12.207785    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:12.207866    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:12.217755    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:12.217840    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:12.231161    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:12.231233    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:12.241611    4508 logs.go:276] 0 containers: []
	W0923 17:24:12.241623    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:12.241699    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:12.252191    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:12.252209    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:12.252216    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:12.290526    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:12.290541    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:12.305550    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:12.305560    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:12.317499    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:12.317510    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:12.334887    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:12.334900    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:12.348105    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:12.348118    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:12.389290    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:12.389301    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:12.403638    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:12.403653    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:12.414921    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:12.414933    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:12.426192    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:12.426207    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:12.430520    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:12.430529    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:12.444164    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:12.444178    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:12.456356    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:12.456368    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:12.490552    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:12.490567    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:12.505569    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:12.505583    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:12.518154    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:12.518164    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:12.529779    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:12.529790    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
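
Discovery then feeds the gather loop that dominates these passes: every ID found is tailed with docker logs --tail 400, so a component with two containers (both kube-apiserver instances above, for instance) contributes two "Gathering logs for ..." entries. A stripped-down sketch of that step; gather is an illustrative name, and the IDs in main are copied from the discovery lines above, so they will not exist on another machine.

	// Sketch only: the per-container tail step behind the gather lines.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather tails the last 400 lines of one container's output, the
	// same cap the log collector applies everywhere.
	func gather(name, id string) {
		fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("gather failed:", err)
			return
		}
		fmt.Print(string(out))
	}

	func main() {
		gather("kube-apiserver", "7b74f5c065d7")
		gather("kube-apiserver", "d197e6aae6df")
	}
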
	I0923 17:24:15.055563    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:14.686812    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:20.057839    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:20.058046    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:20.083545    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:20.083629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:20.098176    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:20.098248    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:20.108416    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:20.108502    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:20.119476    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:20.119559    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:20.130132    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:20.130202    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:20.140885    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:20.140950    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:20.151371    4508 logs.go:276] 0 containers: []
	W0923 17:24:20.151384    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:20.151457    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:20.162228    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:20.162245    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:20.162250    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:20.176407    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:20.176420    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:20.195345    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:20.195359    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:20.211628    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:20.211644    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:20.223146    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:20.223157    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:20.227564    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:20.227571    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:20.241925    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:20.241936    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:20.277379    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:20.277390    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:20.289514    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:20.289527    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:20.311315    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:20.311323    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:20.322433    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:20.322445    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:20.336871    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:20.336885    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:20.349641    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:20.349657    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:20.362762    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:20.362785    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:20.385296    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:20.385310    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:20.399821    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:20.399837    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:20.439380    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:20.439390    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:19.689196    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:19.689663    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:19.718586    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:19.718741    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:19.736895    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:19.737005    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:19.750457    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:19.750557    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:19.762149    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:19.762232    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:19.772612    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:19.772695    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:19.783465    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:19.783550    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:19.794473    4371 logs.go:276] 0 containers: []
	W0923 17:24:19.794485    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:19.794563    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:19.807026    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:19.807043    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:19.807048    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:19.821814    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:19.821825    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:19.839781    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:19.839792    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:19.851596    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:19.851610    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:19.863353    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:19.863365    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:19.881125    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:19.881141    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:19.897102    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:19.897114    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:19.935905    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:19.935916    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:19.970619    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:19.970631    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:19.982701    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:19.982711    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:19.999070    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:19.999085    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:20.024717    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:20.024727    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:20.036359    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:20.036372    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:20.041633    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:20.041646    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:20.067045    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:20.067059    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:22.985105    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:22.581484    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:27.987451    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:27.987629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:27.998777    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:27.998868    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:28.009249    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:28.009340    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:28.019760    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:28.019844    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:28.031053    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:28.031138    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:28.051519    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:28.051603    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:28.062532    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:28.062612    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:28.072393    4508 logs.go:276] 0 containers: []
	W0923 17:24:28.072405    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:28.072469    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:28.082849    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:28.082866    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:28.082871    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:28.100649    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:28.100660    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:28.123173    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:28.123185    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:28.162303    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:28.162320    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:28.199336    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:28.199346    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:28.219395    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:28.219406    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:28.232186    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:28.232198    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:28.244485    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:28.244496    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:28.266867    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:28.266876    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:28.301651    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:28.301666    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:28.316006    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:28.316016    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:28.330857    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:28.330868    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:28.343000    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:28.343012    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:28.348240    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:28.348253    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:28.363589    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:28.363599    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:28.375260    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:28.375271    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:28.395605    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:28.395620    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
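
The dmesg step keeps kernel noise out of the bundle by restricting output to warning-and-higher priorities via --level warn,err,crit,alert,emerg, with tail -n 400 applying the usual cap; the -P, -H and -L=never flags appear to disable the pager, switch on human-readable timestamps and suppress color in util-linux dmesg, respectively. A sketch that replays the exact command string locally, assuming a Linux host with util-linux dmesg and passwordless sudo:

	// Sketch only: the kernel-log filter copied verbatim from the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("dmesg failed:", err)
		}
		fmt.Print(string(out))
	}
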
	I0923 17:24:30.910994    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:27.583705    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:27.583958    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:27.602387    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:27.602491    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:27.615550    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:27.615633    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:27.626370    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:27.626458    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:27.638334    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:27.638407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:27.649308    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:27.649384    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:27.660241    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:27.660316    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:27.671202    4371 logs.go:276] 0 containers: []
	W0923 17:24:27.671215    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:27.671276    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:27.682445    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:27.682463    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:27.682470    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:27.721793    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:27.721801    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:27.735422    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:27.735433    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:27.746977    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:27.746988    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:27.758639    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:27.758649    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:27.763281    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:27.763291    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:27.778205    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:27.778215    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:27.795003    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:27.795013    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:27.809902    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:27.809913    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:27.821976    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:27.821987    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:27.833468    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:27.833479    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:27.856977    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:27.856985    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:27.892181    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:27.892197    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:27.909175    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:27.909188    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:27.921277    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:27.921290    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
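
One detail of the recurring "describe nodes" step: the kubectl binary is the one minikube stages under /var/lib/minikube/binaries for the cluster's exact Kubernetes version (v1.24.1 here), pointed at the in-guest kubeconfig rather than any host default, so client/apiserver version skew is ruled out. A hypothetical helper that assembles the same command string; describeNodesCmd is not a minikube function.

	// Sketch only: rebuilds the version-pinned kubectl invocation above.
	package main

	import "fmt"

	func describeNodesCmd(k8sVersion string) string {
		return fmt.Sprintf(
			"sudo /var/lib/minikube/binaries/%s/kubectl describe nodes"+
				" --kubeconfig=/var/lib/minikube/kubeconfig", k8sVersion)
	}

	func main() {
		fmt.Println(describeNodesCmd("v1.24.1"))
	}
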
	I0923 17:24:30.439971    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:35.913324    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:35.913492    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:35.929198    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:35.929298    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:35.939811    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:35.939900    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:35.950529    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:35.950608    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:35.962797    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:35.962881    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:35.978151    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:35.978237    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:35.989453    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:35.989536    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:36.000251    4508 logs.go:276] 0 containers: []
	W0923 17:24:36.000264    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:36.000336    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:36.010996    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:36.011016    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:36.011021    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:36.034812    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:36.034821    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:36.068310    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:36.068325    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:36.083234    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:36.083245    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:36.094593    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:36.094605    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:36.106201    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:36.106213    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:36.124997    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:36.125007    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:36.137382    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:36.137393    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:36.176320    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:36.176334    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:36.180856    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:36.180865    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:36.224358    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:36.224376    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:36.237604    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:36.237616    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:36.250165    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:36.250176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:36.264641    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:36.264651    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:36.279392    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:36.279405    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:36.296097    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:36.296111    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:36.308293    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:36.308309    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:35.441936    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:35.442253    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:35.461248    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:35.461370    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:35.475719    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:35.475821    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:35.488094    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:35.488175    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:35.498958    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:35.499047    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:35.509558    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:35.509653    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:35.520190    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:35.520278    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:35.530744    4371 logs.go:276] 0 containers: []
	W0923 17:24:35.530756    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:35.530829    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:35.541402    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:35.541420    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:35.541426    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:35.580760    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:35.580771    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:35.595414    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:35.595424    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:35.607816    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:35.607827    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:35.629111    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:35.629124    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:35.646364    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:35.646374    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:35.657181    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:35.657192    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:35.661736    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:35.661743    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:35.673461    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:35.673472    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:35.696892    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:35.696900    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:35.718047    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:35.718058    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:35.730309    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:35.730324    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:35.765657    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:35.765668    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:35.780840    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:35.780852    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:35.793039    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:35.793054    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:38.823688    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:38.307348    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:43.825262    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:43.825440    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:43.842220    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:43.842326    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:43.859780    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:43.859873    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:43.871061    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:43.871143    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:43.881698    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:43.881787    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:43.892333    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:43.892423    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:43.903204    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:43.903289    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:43.913218    4508 logs.go:276] 0 containers: []
	W0923 17:24:43.913235    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:43.913311    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:43.923934    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:43.923953    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:43.923958    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:43.957988    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:43.958004    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:43.995331    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:43.995345    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:44.012535    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:44.012549    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:44.026255    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:44.026269    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:44.038178    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:44.038193    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:44.042388    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:44.042398    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:44.056732    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:44.056742    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:44.071583    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:44.071594    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:44.083316    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:44.083325    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:44.098809    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:44.098824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:44.122552    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:44.122565    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:44.161775    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:44.161786    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:44.184425    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:44.184433    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:44.207397    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:44.207409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:44.223465    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:44.223476    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:44.243261    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:44.243277    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
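Each "Gathering logs" cycle above follows the same two-step pattern: discover per-component container IDs with a docker ps name filter, then tail each container's log. A rough local equivalent in Go, assuming a docker CLI on PATH; minikube actually runs these commands on the guest over SSH via ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                // mirrors: docker logs --tail 400 <id>
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }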
	I0923 17:24:46.757364    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:43.309975    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:43.310153    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:43.323235    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:43.323326    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:43.334559    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:43.334647    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:43.346701    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:43.346792    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:43.358272    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:43.358360    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:43.368298    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:43.368383    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:43.379178    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:43.379253    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:43.389327    4371 logs.go:276] 0 containers: []
	W0923 17:24:43.389339    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:43.389415    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:43.406700    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:43.406715    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:43.406721    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:43.418604    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:43.418613    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:43.442597    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:43.442604    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:43.454065    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:43.454076    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:43.491947    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:43.491963    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:43.504090    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:43.504100    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:43.518987    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:43.518997    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:43.533121    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:43.533132    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:43.574878    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:43.574892    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:43.587303    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:43.587315    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:43.599191    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:43.599203    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:43.618534    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:43.618548    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:43.623269    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:43.623275    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:43.635436    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:43.635450    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:43.646865    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:43.646874    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:46.164179    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:51.759638    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:51.759888    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:51.781518    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:51.781639    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:51.796808    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:51.796910    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:51.809528    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:51.809607    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:51.820423    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:51.820514    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:51.830664    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:51.830741    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:51.840944    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:51.841031    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:51.851365    4508 logs.go:276] 0 containers: []
	W0923 17:24:51.851376    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:51.851447    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:51.861567    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:51.861582    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:51.861587    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:51.865551    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:51.865559    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:51.902546    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:51.902557    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:51.919500    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:51.919511    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:51.930770    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:51.930782    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:51.165558    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:51.165865    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:51.185178    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:51.185287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:51.199771    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:51.199873    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:51.212617    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:51.212707    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:51.223622    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:51.223700    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:51.234713    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:51.234801    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:51.245934    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:51.246047    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:51.262196    4371 logs.go:276] 0 containers: []
	W0923 17:24:51.262208    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:51.262285    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:51.273203    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:51.273220    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:51.273225    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:24:51.288224    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:51.288238    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:51.299196    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:51.299207    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:51.317287    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:51.317297    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:51.328593    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:51.328603    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:51.362811    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:51.362823    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:51.377941    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:51.377952    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:51.382316    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:51.382322    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:51.393687    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:51.393699    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:51.432689    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:51.432698    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:51.447340    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:51.447357    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:51.460494    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:51.460507    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:51.481932    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:51.481946    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:51.493634    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:51.493650    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:51.504973    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:51.504990    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:51.943225    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:51.943237    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:51.977722    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:51.977738    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:51.992176    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:51.992190    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:52.006164    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:52.006180    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:52.017692    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:52.017706    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:52.030149    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:52.030165    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:52.067413    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:52.067426    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:52.081988    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:52.081999    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:52.093584    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:52.093597    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:52.104866    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:52.104876    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:52.120434    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:52.120445    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:52.131788    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:52.131801    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:54.656081    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:54.032561    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:59.658431    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:59.658495    4508 kubeadm.go:597] duration metric: took 4m3.885519125s to restartPrimaryControlPlane
	W0923 17:24:59.658573    4508 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 17:24:59.658600    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 17:25:00.648521    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 17:25:00.653675    4508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:25:00.656618    4508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:25:00.659413    4508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:25:00.659419    4508 kubeadm.go:157] found existing configuration files:
	
	I0923 17:25:00.659445    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0923 17:25:00.662004    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:25:00.662036    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:25:00.664639    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0923 17:25:00.667888    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:25:00.667916    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:25:00.671166    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0923 17:25:00.673701    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:25:00.673728    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:25:00.676514    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0923 17:25:00.679140    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:25:00.679170    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
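The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when that endpoint cannot be confirmed (here every grep exits with status 2 simply because kubeadm reset already deleted the files). A simplified sketch of that logic, assuming direct file access rather than the ssh_runner used in the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50528"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // endpoint missing or file unreadable: treat the config as stale
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // a missing file is fine here, so the error is ignored
            }
        }
    }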
	I0923 17:25:00.681848    4508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 17:25:00.697744    4508 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 17:25:00.697817    4508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 17:25:00.746918    4508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 17:25:00.746984    4508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 17:25:00.747046    4508 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 17:25:00.794786    4508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 17:25:00.802008    4508 out.go:235]   - Generating certificates and keys ...
	I0923 17:25:00.802044    4508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 17:25:00.802078    4508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 17:25:00.802117    4508 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 17:25:00.802149    4508 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 17:25:00.802195    4508 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 17:25:00.802224    4508 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 17:25:00.802257    4508 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 17:25:00.802291    4508 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 17:25:00.802328    4508 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 17:25:00.802369    4508 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 17:25:00.802398    4508 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 17:25:00.802431    4508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 17:25:00.841130    4508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 17:25:00.921899    4508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 17:25:01.017865    4508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 17:25:01.414135    4508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 17:25:01.442257    4508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 17:25:01.442649    4508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 17:25:01.442674    4508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 17:25:01.537344    4508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 17:25:01.545488    4508 out.go:235]   - Booting up control plane ...
	I0923 17:25:01.545541    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 17:25:01.545583    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 17:25:01.545618    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 17:25:01.545659    4508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 17:25:01.545754    4508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 17:24:59.033169    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:59.033399    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:59.050970    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:24:59.051074    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:59.063990    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:24:59.064080    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:59.075853    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:24:59.075943    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:59.090396    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:24:59.090474    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:59.102620    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:24:59.102697    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:59.113396    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:24:59.113481    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:59.123751    4371 logs.go:276] 0 containers: []
	W0923 17:24:59.123764    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:59.123838    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:59.134447    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:24:59.134467    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:24:59.134473    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:24:59.148087    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:24:59.148098    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:24:59.159952    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:59.159964    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:59.196287    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:59.196298    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:59.200804    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:24:59.200813    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:24:59.212006    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:24:59.212017    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:24:59.226898    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:59.226908    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:59.264758    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:24:59.264766    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:24:59.276911    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:24:59.276923    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:24:59.298855    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:59.298864    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:59.323597    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:24:59.323613    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:59.335479    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:24:59.335494    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:24:59.349438    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:24:59.349453    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:24:59.363423    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:24:59.363438    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:24:59.380071    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:24:59.380089    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:01.896427    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:06.040119    4508 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501354 seconds
	I0923 17:25:06.040187    4508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 17:25:06.043632    4508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 17:25:06.556074    4508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 17:25:06.556425    4508 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-180000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 17:25:07.060396    4508 kubeadm.go:310] [bootstrap-token] Using token: v1uqfy.5rc75n0j3i4peg2o
	I0923 17:25:06.898692    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:06.898919    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:06.918921    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:06.919013    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:06.930969    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:06.931051    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:06.941869    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:06.941964    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:06.952929    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:06.953012    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:06.964007    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:06.964094    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:06.975187    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:06.975264    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:06.985805    4371 logs.go:276] 0 containers: []
	W0923 17:25:06.985817    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:06.985887    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:06.997459    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:06.997476    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:06.997482    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:07.034469    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:07.034483    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:07.047060    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:07.047072    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:07.060304    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:07.060318    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:07.085322    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:07.085335    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:07.089949    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:07.089957    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:07.104280    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:07.104293    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:07.116476    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:07.116488    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:07.131784    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:07.131799    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:07.173820    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:07.173835    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:07.187770    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:07.187786    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:07.199332    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:07.199347    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:07.066210    4508 out.go:235]   - Configuring RBAC rules ...
	I0923 17:25:07.066323    4508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 17:25:07.066459    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 17:25:07.072767    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 17:25:07.073709    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 17:25:07.074754    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 17:25:07.075766    4508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 17:25:07.079087    4508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 17:25:07.275210    4508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 17:25:07.464851    4508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 17:25:07.465340    4508 kubeadm.go:310] 
	I0923 17:25:07.465371    4508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 17:25:07.465375    4508 kubeadm.go:310] 
	I0923 17:25:07.465420    4508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 17:25:07.465426    4508 kubeadm.go:310] 
	I0923 17:25:07.465438    4508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 17:25:07.465488    4508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 17:25:07.465518    4508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 17:25:07.465521    4508 kubeadm.go:310] 
	I0923 17:25:07.465553    4508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 17:25:07.465556    4508 kubeadm.go:310] 
	I0923 17:25:07.465581    4508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 17:25:07.465584    4508 kubeadm.go:310] 
	I0923 17:25:07.465609    4508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 17:25:07.465665    4508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 17:25:07.465704    4508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 17:25:07.465710    4508 kubeadm.go:310] 
	I0923 17:25:07.465762    4508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 17:25:07.465802    4508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 17:25:07.465805    4508 kubeadm.go:310] 
	I0923 17:25:07.465868    4508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v1uqfy.5rc75n0j3i4peg2o \
	I0923 17:25:07.465943    4508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 \
	I0923 17:25:07.465955    4508 kubeadm.go:310] 	--control-plane 
	I0923 17:25:07.465957    4508 kubeadm.go:310] 
	I0923 17:25:07.466025    4508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 17:25:07.466029    4508 kubeadm.go:310] 
	I0923 17:25:07.466088    4508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v1uqfy.5rc75n0j3i4peg2o \
	I0923 17:25:07.466146    4508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 
	I0923 17:25:07.466208    4508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
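The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch that recomputes such a hash; the ca.crt path is illustrative, derived from the certificateDir logged earlier:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

A joining node uses this pin to authenticate the control plane before trusting anything it serves.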
	I0923 17:25:07.466218    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:25:07.466227    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:25:07.470731    4508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 17:25:07.478739    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 17:25:07.481650    4508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
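The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration; its exact contents are not shown in the log. A sketch that writes out a minimal bridge conflist of the general shape involved, with illustrative field values only:

    package main

    import "os"

    // A minimal bridge CNI config of the general shape minikube installs.
    // Field values here are illustrative, not the exact 496-byte file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors: sudo mkdir -p /etc/cni/net.d
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }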
	I0923 17:25:07.486275    4508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 17:25:07.486317    4508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 17:25:07.486345    4508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-180000 minikube.k8s.io/updated_at=2024_09_23T17_25_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=stopped-upgrade-180000 minikube.k8s.io/primary=true
	I0923 17:25:07.529919    4508 kubeadm.go:1113] duration metric: took 43.636208ms to wait for elevateKubeSystemPrivileges
	I0923 17:25:07.529927    4508 ops.go:34] apiserver oom_adj: -16
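The "apiserver oom_adj: -16" reading above comes from /proc/<apiserver pid>/oom_adj, the kernel's legacy OOM-kill bias for the process (negative values make the apiserver less likely to be OOM-killed). A local Go equivalent of the cat /proc/$(pgrep kube-apiserver)/oom_adj command shown earlier, assuming pgrep is available:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // mirrors: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err) // pgrep exits non-zero on no match
            return
        }
        pids := strings.Fields(string(out))
        if len(pids) == 0 {
            return
        }
        adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // e.g. -16
    }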
	I0923 17:25:07.529936    4508 kubeadm.go:394] duration metric: took 4m11.770601792s to StartCluster
	I0923 17:25:07.529945    4508 settings.go:142] acquiring lock: {Name:mk533b8e20cbdc896b9e0666ee546603a1b156f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:25:07.530032    4508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:25:07.530433    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
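The "WriteFile acquiring" line shows minikube taking a named lock with Delay:500ms and Timeout:1m0s before updating the kubeconfig. A simplified lockfile loop with the same retry parameters, offered as an assumption about the mechanism rather than minikube's actual lock.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire takes an exclusive lockfile, retrying every delay until timeout,
    // echoing the {Delay:500ms Timeout:1m0s} parameters in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to update kubeconfig")
    }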
	I0923 17:25:07.530655    4508 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:25:07.530663    4508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 17:25:07.530697    4508 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-180000"
	I0923 17:25:07.530707    4508 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-180000"
	W0923 17:25:07.530710    4508 addons.go:243] addon storage-provisioner should already be in state true
	I0923 17:25:07.530721    4508 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0923 17:25:07.530732    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:25:07.530770    4508 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-180000"
	I0923 17:25:07.530775    4508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-180000"
	I0923 17:25:07.531704    4508 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10287a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:25:07.531826    4508 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-180000"
	W0923 17:25:07.531830    4508 addons.go:243] addon default-storageclass should already be in state true
	I0923 17:25:07.531837    4508 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0923 17:25:07.533662    4508 out.go:177] * Verifying Kubernetes components...
	I0923 17:25:07.534002    4508 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 17:25:07.537855    4508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 17:25:07.537861    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:25:07.541618    4508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:25:07.545668    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:25:07.549742    4508 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:25:07.549749    4508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 17:25:07.549755    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:25:07.635938    4508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:25:07.641856    4508 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:25:07.641901    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:25:07.645984    4508 api_server.go:72] duration metric: took 115.320125ms to wait for apiserver process to appear ...
	I0923 17:25:07.645992    4508 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:25:07.646000    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:07.651629    4508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:25:07.707092    4508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 17:25:08.018498    4508 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 17:25:08.018510    4508 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 17:25:07.214527    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:07.214540    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:07.226517    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:07.226528    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:07.243490    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:07.243505    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:09.761466    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:12.648056    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:12.648110    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:14.763794    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:14.764064    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:14.784284    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:14.784397    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:14.798333    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:14.798426    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:14.810129    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:14.810205    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:14.820895    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:14.820977    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:14.831196    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:14.831286    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:14.841937    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:14.842020    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:14.852387    4371 logs.go:276] 0 containers: []
	W0923 17:25:14.852399    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:14.852466    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:14.862635    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:14.862652    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:14.862658    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:14.874946    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:14.874959    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:14.886874    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:14.886884    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:14.891990    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:14.891997    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:14.927097    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:14.927107    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:14.942327    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:14.942337    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:14.960711    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:14.960723    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:14.975334    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:14.975347    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:14.987639    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:14.987650    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:15.011300    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:15.011309    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:15.022949    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:15.022962    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:15.060877    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:15.060886    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:15.075181    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:15.075196    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:15.086982    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:15.087000    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:15.098689    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:15.098705    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:17.648395    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:17.648428    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:17.621812    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:22.649096    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:22.649117    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:22.624019    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:22.624157    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:22.635864    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:22.635957    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:22.646399    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:22.646493    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:22.657245    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:22.657327    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:22.668508    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:22.668596    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:22.682216    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:22.682292    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:22.693121    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:22.693204    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:22.705680    4371 logs.go:276] 0 containers: []
	W0923 17:25:22.705692    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:22.705762    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:22.715876    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:22.715895    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:22.715900    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:22.720538    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:22.720549    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:22.732762    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:22.732776    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:22.747187    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:22.747198    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:22.764958    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:22.764967    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:22.778360    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:22.778373    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:22.819157    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:22.819169    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:22.833668    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:22.833677    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:22.847894    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:22.847904    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:22.860359    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:22.860370    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:22.872208    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:22.872220    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:22.906900    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:22.906913    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:22.920928    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:22.920942    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:22.935118    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:22.935129    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:22.946896    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:22.946908    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:25.473508    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:27.649748    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:27.649787    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:30.473897    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:30.474148    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:30.496488    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:30.496605    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:30.511866    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:30.511964    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:30.525079    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:30.525175    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:30.535763    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:30.535841    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:30.550492    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:30.550572    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:30.561089    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:30.561172    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:30.571748    4371 logs.go:276] 0 containers: []
	W0923 17:25:30.571761    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:30.571837    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:30.582367    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:30.582386    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:30.582393    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:30.594551    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:30.594563    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:30.599436    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:30.599442    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:30.612244    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:30.612255    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:30.623948    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:30.623963    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:30.647307    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:30.647318    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:30.662445    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:30.662455    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:30.676108    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:30.676118    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:30.694396    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:30.694407    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:30.734186    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:30.734196    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:30.748644    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:30.748654    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:30.760078    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:30.760089    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:30.771956    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:30.771967    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:30.806620    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:30.806633    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:30.820680    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:30.820694    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:32.650460    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:32.650489    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:33.334968    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:37.651304    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:37.651329    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 17:25:38.020524    4508 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 17:25:38.029743    4508 out.go:177] * Enabled addons: storage-provisioner
	I0923 17:25:38.037706    4508 addons.go:510] duration metric: took 30.507258541s for enable addons: enabled=[storage-provisioner]
	I0923 17:25:38.337230    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:38.337490    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:25:38.367066    4371 logs.go:276] 1 containers: [92defea7a2e0]
	I0923 17:25:38.367200    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:25:38.382974    4371 logs.go:276] 1 containers: [44b700080a96]
	I0923 17:25:38.383075    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:25:38.396315    4371 logs.go:276] 4 containers: [914c00c75beb 42a6d3d4a08f 13581f2593f0 acf535e26be1]
	I0923 17:25:38.396407    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:25:38.407572    4371 logs.go:276] 1 containers: [30d3a74c9d15]
	I0923 17:25:38.407654    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:25:38.418043    4371 logs.go:276] 1 containers: [dcc7c5ea88d5]
	I0923 17:25:38.418129    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:25:38.428196    4371 logs.go:276] 1 containers: [d1912ab1fefc]
	I0923 17:25:38.428287    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:25:38.438381    4371 logs.go:276] 0 containers: []
	W0923 17:25:38.438394    4371 logs.go:278] No container was found matching "kindnet"
	I0923 17:25:38.438475    4371 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:25:38.449082    4371 logs.go:276] 1 containers: [360508e123ae]
	I0923 17:25:38.449100    4371 logs.go:123] Gathering logs for coredns [13581f2593f0] ...
	I0923 17:25:38.449106    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13581f2593f0"
	I0923 17:25:38.460942    4371 logs.go:123] Gathering logs for kube-controller-manager [d1912ab1fefc] ...
	I0923 17:25:38.460956    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1912ab1fefc"
	I0923 17:25:38.481860    4371 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:25:38.481872    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:25:38.517025    4371 logs.go:123] Gathering logs for kube-apiserver [92defea7a2e0] ...
	I0923 17:25:38.517039    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92defea7a2e0"
	I0923 17:25:38.531633    4371 logs.go:123] Gathering logs for kube-scheduler [30d3a74c9d15] ...
	I0923 17:25:38.531647    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30d3a74c9d15"
	I0923 17:25:38.546862    4371 logs.go:123] Gathering logs for storage-provisioner [360508e123ae] ...
	I0923 17:25:38.546874    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 360508e123ae"
	I0923 17:25:38.558018    4371 logs.go:123] Gathering logs for Docker ...
	I0923 17:25:38.558033    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:25:38.580798    4371 logs.go:123] Gathering logs for kubelet ...
	I0923 17:25:38.580806    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:25:38.619909    4371 logs.go:123] Gathering logs for dmesg ...
	I0923 17:25:38.619918    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:25:38.624891    4371 logs.go:123] Gathering logs for kube-proxy [dcc7c5ea88d5] ...
	I0923 17:25:38.624900    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcc7c5ea88d5"
	I0923 17:25:38.636609    4371 logs.go:123] Gathering logs for etcd [44b700080a96] ...
	I0923 17:25:38.636622    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44b700080a96"
	I0923 17:25:38.650205    4371 logs.go:123] Gathering logs for coredns [914c00c75beb] ...
	I0923 17:25:38.650218    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 914c00c75beb"
	I0923 17:25:38.661770    4371 logs.go:123] Gathering logs for coredns [42a6d3d4a08f] ...
	I0923 17:25:38.661781    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42a6d3d4a08f"
	I0923 17:25:38.678160    4371 logs.go:123] Gathering logs for coredns [acf535e26be1] ...
	I0923 17:25:38.678172    4371 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf535e26be1"
	I0923 17:25:38.689550    4371 logs.go:123] Gathering logs for container status ...
	I0923 17:25:38.689559    4371 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:25:41.203083    4371 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:46.205452    4371 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:46.210182    4371 out.go:201] 
	W0923 17:25:46.213125    4371 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 17:25:46.213143    4371 out.go:270] * 
	W0923 17:25:46.214646    4371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:25:46.224089    4371 out.go:201] 
	I0923 17:25:42.652420    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:42.652469    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:47.654260    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:47.654318    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:52.656192    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:52.656237    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:57.658512    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:57.658536    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
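
	Both minikube processes in this stretch (PIDs 4371 and 4508) are doing the same thing: polling https://10.0.2.15:8443/healthz, with every attempt ending in a client timeout until the 6m0s node wait expires and PID 4371 exits with GUEST_START. A minimal sketch of that polling pattern, assuming a self-signed serving certificate (hence InsecureSkipVerify) and the endpoint taken from the log; this is an illustration of the loop's shape, not minikube's actual api_server.go code:

	    // healthz_probe.go - sketch of the apiserver health poll seen above.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            // A ~5s request timeout matches the spacing between attempts in the log.
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(6 * time.Minute) // the log waits "6m0s for node"
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                // e.g. "Client.Timeout exceeded while awaiting headers"
	                fmt.Printf("stopped: %v\n", err)
	                time.Sleep(time.Second) // brief backoff if the dial fails fast
	                continue
	            }
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                fmt.Println("apiserver healthz reported healthy")
	                return
	            }
	        }
	        fmt.Println("apiserver healthz never reported healthy")
	    }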
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-09-24 00:16:44 UTC, ends at Tue 2024-09-24 00:26:02 UTC. --
	Sep 24 00:25:47 running-upgrade-903000 dockerd[3218]: time="2024-09-24T00:25:47.239653270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 24 00:25:47 running-upgrade-903000 dockerd[3218]: time="2024-09-24T00:25:47.239741557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 24 00:25:47 running-upgrade-903000 dockerd[3218]: time="2024-09-24T00:25:47.239782264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 24 00:25:47 running-upgrade-903000 dockerd[3218]: time="2024-09-24T00:25:47.239862426Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e488e70c280542d49e65c13393556c46128670011d1585f7b95249726c7c0bca pid=19405 runtime=io.containerd.runc.v2
	Sep 24 00:25:47 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:47Z" level=error msg="ContainerStats resp: {0x4000621b80 linux}"
	Sep 24 00:25:47 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x4000982140 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x40009826c0 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x40007df180 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x40007df940 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x40007dfa80 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x40005046c0 linux}"
	Sep 24 00:25:48 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:48Z" level=error msg="ContainerStats resp: {0x4000504d00 linux}"
	Sep 24 00:25:52 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 00:25:57 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 24 00:25:58 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:58Z" level=error msg="ContainerStats resp: {0x4000ac0e80 linux}"
	Sep 24 00:25:58 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:58Z" level=error msg="ContainerStats resp: {0x40009cb400 linux}"
	Sep 24 00:25:59 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:25:59Z" level=error msg="ContainerStats resp: {0x4000621500 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x4000983600 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x4000983a40 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x4000983ec0 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x400047a040 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x400047a880 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x4000504100 linux}"
	Sep 24 00:26:00 running-upgrade-903000 cri-dockerd[3062]: time="2024-09-24T00:26:00Z" level=error msg="ContainerStats resp: {0x4000504e40 linux}"
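
	The cri-dockerd entries in this window are all the same `ContainerStats resp: {...}` message emitted at error level; that pattern reads like logging-level noise rather than a real failure, though the journal is the right place to rule that out. A hypothetical triage helper (a sketch, not part of minikube): feed it `journalctl -u cri-docker -n 400` on stdin and it tallies entries per `level=` field so repetitive noise stands out from genuine errors.

	    // journal_levels.go - tally journal lines by their level= field.
	    package main

	    import (
	        "bufio"
	        "fmt"
	        "os"
	        "regexp"
	    )

	    func main() {
	        levelRe := regexp.MustCompile(`level=(\w+)`)
	        counts := map[string]int{}
	        sc := bufio.NewScanner(os.Stdin)
	        for sc.Scan() {
	            if m := levelRe.FindStringSubmatch(sc.Text()); m != nil {
	                counts[m[1]]++ // e.g. counts["error"]++ for ContainerStats lines
	            }
	        }
	        for level, n := range counts {
	            fmt.Printf("%-7s %d\n", level, n)
	        }
	    }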
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1619b220222a7       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   0be392734229d
	e488e70c28054       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   3b5f7a8bf3260
	914c00c75beb8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0be392734229d
	42a6d3d4a08f3       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   3b5f7a8bf3260
	dcc7c5ea88d59       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   4bea63557588f
	360508e123ae6       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   09d350110f13c
	44b700080a96e       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   3fc139f800633
	30d3a74c9d153       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   ceeac0ea679bb
	d1912ab1fefcf       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0f0c54f5728ba
	92defea7a2e0d       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   7439c550d7789
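
	Each gathering cycle earlier in the log discovers container IDs per control-plane component with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`, exactly as recorded by ssh_runner. A self-contained sketch of that discovery step, assuming a local Docker daemon (in the test it runs over SSH inside the guest VM):

	    // container_ids.go - list container IDs for one control-plane component.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func containerIDs(component string) ([]string, error) {
	        // Mirrors the command logged above, e.g. for "kube-apiserver".
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        ids, err := containerIDs("kube-apiserver")
	        if err != nil {
	            fmt.Println("docker ps failed:", err)
	            return
	        }
	        // cf. the log's "1 containers: [92defea7a2e0]"
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	    }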
	
	
	==> coredns [1619b220222a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 839807477419575619.2113784100578149391. HINFO: read udp 10.244.0.3:43028->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 839807477419575619.2113784100578149391. HINFO: read udp 10.244.0.3:56171->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 839807477419575619.2113784100578149391. HINFO: read udp 10.244.0.3:37449->10.0.2.3:53: i/o timeout
	
	
	==> coredns [42a6d3d4a08f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:37960->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:49941->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:37881->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:44736->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:33701->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:50225->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:57781->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:40994->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:44976->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4582885251795777672.4915522794420536494. HINFO: read udp 10.244.0.2:36349->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [914c00c75beb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3855020416681839441.3823420122438742591. HINFO: read udp 10.244.0.3:42971->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3855020416681839441.3823420122438742591. HINFO: read udp 10.244.0.3:56671->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3855020416681839441.3823420122438742591. HINFO: read udp 10.244.0.3:39992->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3855020416681839441.3823420122438742591. HINFO: read udp 10.244.0.3:34081->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e488e70c2805] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3991959850928059311.4358713515825839945. HINFO: read udp 10.244.0.2:52275->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991959850928059311.4358713515825839945. HINFO: read udp 10.244.0.2:42127->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3991959850928059311.4358713515825839945. HINFO: read udp 10.244.0.2:52720->10.0.2.3:53: i/o timeout
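
	All four CoreDNS instances fail the same way: their HINFO probe queries to the upstream resolver at 10.0.2.3:53 (the default DNS forwarder in QEMU's user-mode networking) time out from the pod network. A minimal sketch that reproduces the upstream query with Go's resolver, assuming a vantage point where 10.0.2.3 is routable; the probe name here is arbitrary, since CoreDNS itself uses random-label HINFO queries:

	    // upstream_dns_check.go - query the 10.0.2.3:53 upstream directly.
	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        r := &net.Resolver{
	            PreferGo: true,
	            // Force every lookup through the upstream seen in the CoreDNS log.
	            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                return d.DialContext(ctx, network, "10.0.2.3:53")
	            },
	        }
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()
	        addrs, err := r.LookupHost(ctx, "registry.k8s.io")
	        if err != nil {
	            fmt.Println("lookup failed:", err) // i/o timeout, as in the CoreDNS log
	            return
	        }
	        fmt.Println("resolved:", addrs)
	    }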
	
	
	==> describe nodes <==
	Name:               running-upgrade-903000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-903000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=running-upgrade-903000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T17_21_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:21:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-903000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:26:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:21:45 +0000   Tue, 24 Sep 2024 00:21:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:21:45 +0000   Tue, 24 Sep 2024 00:21:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:21:45 +0000   Tue, 24 Sep 2024 00:21:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:21:45 +0000   Tue, 24 Sep 2024 00:21:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-903000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc429afd38d84c639fc6da286d7cb211
	  System UUID:                cc429afd38d84c639fc6da286d7cb211
	  Boot ID:                    6097790d-d049-4f55-8710-e86c054db865
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8w5vk                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-ttmgl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-903000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-903000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-903000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-d5747                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-903000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-903000 event: Registered Node running-upgrade-903000 in Controller
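
	Note the tension in this section: the node reports Ready with a current lease RenewTime (00:26:00, two seconds before the journal ends), yet the minikube client's healthz probes from the host keep timing out. That combination is consistent with a host-to-guest connectivity problem (10.0.2.15 is the guest-side address of QEMU's user-mode network, which is typically not reachable from the host) rather than a crashed control plane. A minimal client-go sketch for checking the Ready condition directly, assuming the kubeconfig path used by the "describe nodes" gathering step above:

	    // node_ready.go - read node Ready conditions via client-go.
	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err) // from the host, this would time out just like healthz
	        }
	        for _, n := range nodes.Items {
	            for _, c := range n.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
	                }
	            }
	        }
	    }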
	
	
	==> dmesg <==
	[  +1.787742] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.066767] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.061220] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.136266] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.071665] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.066676] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[Sep24 00:17] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[  +9.651722] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.685193] systemd-fstab-generator[2211]: Ignoring "noauto" for root device
	[  +0.154121] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.101514] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.095244] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[ +12.635573] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.235148] systemd-fstab-generator[3017]: Ignoring "noauto" for root device
	[  +0.074999] systemd-fstab-generator[3030]: Ignoring "noauto" for root device
	[  +0.083889] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.076316] systemd-fstab-generator[3055]: Ignoring "noauto" for root device
	[  +2.413995] systemd-fstab-generator[3205]: Ignoring "noauto" for root device
	[  +3.471397] systemd-fstab-generator[3597]: Ignoring "noauto" for root device
	[  +2.109740] systemd-fstab-generator[4417]: Ignoring "noauto" for root device
	[ +17.579134] kauditd_printk_skb: 68 callbacks suppressed
	[Sep24 00:18] kauditd_printk_skb: 21 callbacks suppressed
	[Sep24 00:21] systemd-fstab-generator[12483]: Ignoring "noauto" for root device
	[  +5.625056] systemd-fstab-generator[13080]: Ignoring "noauto" for root device
	[  +0.470341] systemd-fstab-generator[13213]: Ignoring "noauto" for root device
	
	
	==> etcd [44b700080a96] <==
	{"level":"info","ts":"2024-09-24T00:21:40.907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-24T00:21:40.907Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-24T00:21:40.911Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T00:21:40.912Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:21:40.912Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:21:40.912Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-24T00:21:40.912Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-24T00:21:41.277Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:21:41.278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:21:41.278Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:21:41.278Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:21:41.278Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-903000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:21:41.278Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:21:41.279Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:21:41.284Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:21:41.284Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-24T00:21:41.286Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:21:41.286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:26:02 up 9 min,  0 users,  load average: 0.58, 0.35, 0.16
	Linux running-upgrade-903000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [92defea7a2e0] <==
	I0924 00:21:42.426084       1 controller.go:611] quota admission added evaluator for: namespaces
	I0924 00:21:42.460042       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0924 00:21:42.460098       1 cache.go:39] Caches are synced for autoregister controller
	I0924 00:21:42.461797       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0924 00:21:42.461950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0924 00:21:42.462097       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0924 00:21:42.462812       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0924 00:21:43.203916       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0924 00:21:43.363984       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0924 00:21:43.367199       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0924 00:21:43.367219       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0924 00:21:43.493910       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 00:21:43.505963       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0924 00:21:43.529122       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0924 00:21:43.531251       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0924 00:21:43.531624       1 controller.go:611] quota admission added evaluator for: endpoints
	I0924 00:21:43.532748       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 00:21:44.517734       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0924 00:21:45.109800       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0924 00:21:45.113025       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0924 00:21:45.134484       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0924 00:21:45.159100       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 00:21:58.087454       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0924 00:21:58.532955       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0924 00:22:00.298637       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [d1912ab1fefc] <==
	I0924 00:21:58.071686       1 shared_informer.go:262] Caches are synced for GC
	I0924 00:21:58.075858       1 shared_informer.go:262] Caches are synced for deployment
	I0924 00:21:58.076999       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0924 00:21:58.080407       1 shared_informer.go:262] Caches are synced for persistent volume
	I0924 00:21:58.081765       1 shared_informer.go:262] Caches are synced for PV protection
	I0924 00:21:58.081777       1 shared_informer.go:262] Caches are synced for stateful set
	I0924 00:21:58.081787       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0924 00:21:58.081898       1 shared_informer.go:262] Caches are synced for PVC protection
	I0924 00:21:58.083827       1 shared_informer.go:262] Caches are synced for endpoint
	I0924 00:21:58.087612       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0924 00:21:58.090729       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d5747"
	I0924 00:21:58.181128       1 shared_informer.go:262] Caches are synced for attach detach
	I0924 00:21:58.196769       1 shared_informer.go:262] Caches are synced for namespace
	I0924 00:21:58.231982       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0924 00:21:58.232185       1 shared_informer.go:262] Caches are synced for crt configmap
	I0924 00:21:58.238127       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 00:21:58.268708       1 shared_informer.go:262] Caches are synced for service account
	I0924 00:21:58.283564       1 shared_informer.go:262] Caches are synced for resource quota
	I0924 00:21:58.331512       1 shared_informer.go:262] Caches are synced for HPA
	I0924 00:21:58.535543       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0924 00:21:58.703421       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 00:21:58.781765       1 shared_informer.go:262] Caches are synced for garbage collector
	I0924 00:21:58.781777       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0924 00:21:59.083641       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-ttmgl"
	I0924 00:21:59.088226       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-8w5vk"
	
	
	==> kube-proxy [dcc7c5ea88d5] <==
	I0924 00:22:00.286084       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0924 00:22:00.286394       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0924 00:22:00.286453       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0924 00:22:00.295574       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0924 00:22:00.295583       1 server_others.go:206] "Using iptables Proxier"
	I0924 00:22:00.295793       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0924 00:22:00.295996       1 server.go:661] "Version info" version="v1.24.1"
	I0924 00:22:00.296000       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:22:00.296278       1 config.go:317] "Starting service config controller"
	I0924 00:22:00.296327       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0924 00:22:00.296336       1 config.go:226] "Starting endpoint slice config controller"
	I0924 00:22:00.296337       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0924 00:22:00.297591       1 config.go:444] "Starting node config controller"
	I0924 00:22:00.297623       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0924 00:22:00.397373       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0924 00:22:00.397400       1 shared_informer.go:262] Caches are synced for service config
	I0924 00:22:00.397693       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [30d3a74c9d15] <==
	W0924 00:21:42.428705       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 00:21:42.428721       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0924 00:21:42.428741       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 00:21:42.428771       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0924 00:21:42.428804       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 00:21:42.428816       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0924 00:21:42.428836       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 00:21:42.428848       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0924 00:21:42.428888       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 00:21:42.428902       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0924 00:21:42.429061       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 00:21:42.429300       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0924 00:21:42.429375       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 00:21:42.429409       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0924 00:21:42.430439       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:21:42.430450       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 00:21:42.430471       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 00:21:42.430475       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0924 00:21:43.297560       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 00:21:43.297620       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0924 00:21:43.410770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 00:21:43.410815       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0924 00:21:43.463039       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:21:43.463171       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0924 00:21:46.124211       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-09-24 00:16:44 UTC, ends at Tue 2024-09-24 00:26:02 UTC. --
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: I0924 00:21:58.177538   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52b211d-d74d-490b-bbae-353f304edd56-lib-modules\") pod \"kube-proxy-d5747\" (UID: \"d52b211d-d74d-490b-bbae-353f304edd56\") " pod="kube-system/kube-proxy-d5747"
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: I0924 00:21:58.177555   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d52b211d-d74d-490b-bbae-353f304edd56-kube-proxy\") pod \"kube-proxy-d5747\" (UID: \"d52b211d-d74d-490b-bbae-353f304edd56\") " pod="kube-system/kube-proxy-d5747"
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: I0924 00:21:58.177565   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtx4\" (UniqueName: \"kubernetes.io/projected/d52b211d-d74d-490b-bbae-353f304edd56-kube-api-access-kgtx4\") pod \"kube-proxy-d5747\" (UID: \"d52b211d-d74d-490b-bbae-353f304edd56\") " pod="kube-system/kube-proxy-d5747"
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.181284   13086 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.181322   13086 projected.go:192] Error preparing data for projected volume kube-api-access-kktsl for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.181357   13086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/16057415-0d1b-4459-8414-57df30bf6315-kube-api-access-kktsl podName:16057415-0d1b-4459-8414-57df30bf6315 nodeName:}" failed. No retries permitted until 2024-09-24 00:21:58.681344121 +0000 UTC m=+13.581731951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kktsl" (UniqueName: "kubernetes.io/projected/16057415-0d1b-4459-8414-57df30bf6315-kube-api-access-kktsl") pod "storage-provisioner" (UID: "16057415-0d1b-4459-8414-57df30bf6315") : configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.280973   13086 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.281059   13086 projected.go:192] Error preparing data for projected volume kube-api-access-kgtx4 for pod kube-system/kube-proxy-d5747: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.281108   13086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d52b211d-d74d-490b-bbae-353f304edd56-kube-api-access-kgtx4 podName:d52b211d-d74d-490b-bbae-353f304edd56 nodeName:}" failed. No retries permitted until 2024-09-24 00:21:58.781098708 +0000 UTC m=+13.681486580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kgtx4" (UniqueName: "kubernetes.io/projected/d52b211d-d74d-490b-bbae-353f304edd56-kube-api-access-kgtx4") pod "kube-proxy-d5747" (UID: "d52b211d-d74d-490b-bbae-353f304edd56") : configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.682270   13086 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.682293   13086 projected.go:192] Error preparing data for projected volume kube-api-access-kktsl for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.682322   13086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/16057415-0d1b-4459-8414-57df30bf6315-kube-api-access-kktsl podName:16057415-0d1b-4459-8414-57df30bf6315 nodeName:}" failed. No retries permitted until 2024-09-24 00:21:59.682311984 +0000 UTC m=+14.582699856 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kktsl" (UniqueName: "kubernetes.io/projected/16057415-0d1b-4459-8414-57df30bf6315-kube-api-access-kktsl") pod "storage-provisioner" (UID: "16057415-0d1b-4459-8414-57df30bf6315") : configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.782765   13086 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.782854   13086 projected.go:192] Error preparing data for projected volume kube-api-access-kgtx4 for pod kube-system/kube-proxy-d5747: configmap "kube-root-ca.crt" not found
	Sep 24 00:21:58 running-upgrade-903000 kubelet[13086]: E0924 00:21:58.782902   13086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d52b211d-d74d-490b-bbae-353f304edd56-kube-api-access-kgtx4 podName:d52b211d-d74d-490b-bbae-353f304edd56 nodeName:}" failed. No retries permitted until 2024-09-24 00:21:59.782892669 +0000 UTC m=+14.683280541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kgtx4" (UniqueName: "kubernetes.io/projected/d52b211d-d74d-490b-bbae-353f304edd56-kube-api-access-kgtx4") pod "kube-proxy-d5747" (UID: "d52b211d-d74d-490b-bbae-353f304edd56") : configmap "kube-root-ca.crt" not found
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.087190   13086 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.091551   13086 topology_manager.go:200] "Topology Admit Handler"
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.186211   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07fc4909-e1e3-4706-9cc5-2e5aa01d1d54-config-volume\") pod \"coredns-6d4b75cb6d-ttmgl\" (UID: \"07fc4909-e1e3-4706-9cc5-2e5aa01d1d54\") " pod="kube-system/coredns-6d4b75cb6d-ttmgl"
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.186237   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c187ae3-5fdc-4d34-a65e-edace3ba608b-config-volume\") pod \"coredns-6d4b75cb6d-8w5vk\" (UID: \"1c187ae3-5fdc-4d34-a65e-edace3ba608b\") " pod="kube-system/coredns-6d4b75cb6d-8w5vk"
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.186293   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjzc\" (UniqueName: \"kubernetes.io/projected/1c187ae3-5fdc-4d34-a65e-edace3ba608b-kube-api-access-zxjzc\") pod \"coredns-6d4b75cb6d-8w5vk\" (UID: \"1c187ae3-5fdc-4d34-a65e-edace3ba608b\") " pod="kube-system/coredns-6d4b75cb6d-8w5vk"
	Sep 24 00:21:59 running-upgrade-903000 kubelet[13086]: I0924 00:21:59.186307   13086 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc99w\" (UniqueName: \"kubernetes.io/projected/07fc4909-e1e3-4706-9cc5-2e5aa01d1d54-kube-api-access-nc99w\") pod \"coredns-6d4b75cb6d-ttmgl\" (UID: \"07fc4909-e1e3-4706-9cc5-2e5aa01d1d54\") " pod="kube-system/coredns-6d4b75cb6d-ttmgl"
	Sep 24 00:22:00 running-upgrade-903000 kubelet[13086]: I0924 00:22:00.330979   13086 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0be392734229da23af8d79ebad29f39d49eab43dbde597e18a5f3b933e8374cc"
	Sep 24 00:22:00 running-upgrade-903000 kubelet[13086]: I0924 00:22:00.335754   13086 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3b5f7a8bf3260108b3fc202c8d9d474e495101c6464bba3e931ea4e2551537ab"
	Sep 24 00:25:47 running-upgrade-903000 kubelet[13086]: I0924 00:25:47.600112   13086 scope.go:110] "RemoveContainer" containerID="13581f2593f093e972cd4ec67ba22ddf7cf985b60f31365d90841f3b8883731e"
	Sep 24 00:25:47 running-upgrade-903000 kubelet[13086]: I0924 00:25:47.612061   13086 scope.go:110] "RemoveContainer" containerID="acf535e26be1df0f83b1b8364f1b2bfc3e21d92244c9b828d78359936df01a3b"
	
	
	==> storage-provisioner [360508e123ae] <==
	I0924 00:22:00.186480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 00:22:00.212055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 00:22:00.212069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 00:22:00.218111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 00:22:00.219300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-903000_14219066-b945-4e9e-9b2d-b00ec461833c!
	I0924 00:22:00.223441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8715bb0-0319-4c0d-bd19-131ef1f8b6f0", APIVersion:"v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-903000_14219066-b945-4e9e-9b2d-b00ec461833c became leader
	I0924 00:22:00.323240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-903000_14219066-b945-4e9e-9b2d-b00ec461833c!
	

-- /stdout --
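Note: the scheduler "forbidden" warnings and the kubelet "kube-root-ca.crt not found" errors in the capture above are the usual transients of a control plane that is still bootstrapping; the RBAC bindings and the root-CA configmap only appear once the apiserver settles. They are symptoms here, not the cause. A minimal spot-check, assuming the apiserver were reachable (it is not in this run), would be:

	kubectl --context running-upgrade-903000 get clusterrolebinding system:kube-scheduler
	kubectl --context running-upgrade-903000 -n kube-system get configmap kube-root-ca.crt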
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-903000 -n running-upgrade-903000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-903000 -n running-upgrade-903000: exit status 2 (15.658341042s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-903000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-903000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-903000: (1.146293209s)
--- FAIL: TestRunningBinaryUpgrade (603.46s)
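Note: the post-mortem above drives minikube status through Go templates over the status struct; exit status 2 with "Stopped" means the profile exists and the host answered, but the apiserver inside the VM never came back after the binary upgrade. A sketch of the equivalent manual checks, using the same binary and profile as the log:

	out/minikube-darwin-arm64 status -p running-upgrade-903000 --format='{{.Host}}'       # VM state
	out/minikube-darwin-arm64 status -p running-upgrade-903000 --format='{{.APIServer}}'  # apiserver state
	out/minikube-darwin-arm64 status -p running-upgrade-903000 --output=json              # full status struct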

TestKubernetesUpgrade (18.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.780500458s)

-- stdout --
	* [kubernetes-upgrade-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-953000" primary control-plane node in "kubernetes-upgrade-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:19:16.332105    4437 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:19:16.332239    4437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:19:16.332243    4437 out.go:358] Setting ErrFile to fd 2...
	I0923 17:19:16.332245    4437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:19:16.332404    4437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:19:16.333469    4437 out.go:352] Setting JSON to false
	I0923 17:19:16.349972    4437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2919,"bootTime":1727134237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:19:16.350072    4437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:19:16.356167    4437 out.go:177] * [kubernetes-upgrade-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:19:16.363047    4437 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:19:16.363085    4437 notify.go:220] Checking for updates...
	I0923 17:19:16.369014    4437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:19:16.371954    4437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:19:16.374858    4437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:19:16.377956    4437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:19:16.381017    4437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:19:16.382754    4437 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:19:16.382814    4437 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:19:16.382869    4437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:19:16.387006    4437 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:19:16.393805    4437 start.go:297] selected driver: qemu2
	I0923 17:19:16.393810    4437 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:19:16.393816    4437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:19:16.396035    4437 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:19:16.398965    4437 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:19:16.402052    4437 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 17:19:16.402068    4437 cni.go:84] Creating CNI manager for ""
	I0923 17:19:16.402088    4437 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 17:19:16.402111    4437 start.go:340] cluster config:
	{Name:kubernetes-upgrade-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:19:16.405594    4437 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:19:16.412959    4437 out.go:177] * Starting "kubernetes-upgrade-953000" primary control-plane node in "kubernetes-upgrade-953000" cluster
	I0923 17:19:16.416934    4437 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 17:19:16.416952    4437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 17:19:16.416959    4437 cache.go:56] Caching tarball of preloaded images
	I0923 17:19:16.417018    4437 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:19:16.417023    4437 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 17:19:16.417086    4437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kubernetes-upgrade-953000/config.json ...
	I0923 17:19:16.417097    4437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kubernetes-upgrade-953000/config.json: {Name:mk03a70ce48b5aee017c6065a44768566902334a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:19:16.417443    4437 start.go:360] acquireMachinesLock for kubernetes-upgrade-953000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:19:16.417474    4437 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "kubernetes-upgrade-953000"
	I0923 17:19:16.417488    4437 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:19:16.417512    4437 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:19:16.421004    4437 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:19:16.436701    4437 start.go:159] libmachine.API.Create for "kubernetes-upgrade-953000" (driver="qemu2")
	I0923 17:19:16.436727    4437 client.go:168] LocalClient.Create starting
	I0923 17:19:16.436797    4437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:19:16.436826    4437 main.go:141] libmachine: Decoding PEM data...
	I0923 17:19:16.436836    4437 main.go:141] libmachine: Parsing certificate...
	I0923 17:19:16.436877    4437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:19:16.436903    4437 main.go:141] libmachine: Decoding PEM data...
	I0923 17:19:16.436910    4437 main.go:141] libmachine: Parsing certificate...
	I0923 17:19:16.437368    4437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:19:16.631422    4437 main.go:141] libmachine: Creating SSH key...
	I0923 17:19:16.695575    4437 main.go:141] libmachine: Creating Disk image...
	I0923 17:19:16.695582    4437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:19:16.695821    4437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:16.705156    4437 main.go:141] libmachine: STDOUT: 
	I0923 17:19:16.705180    4437 main.go:141] libmachine: STDERR: 
	I0923 17:19:16.705251    4437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2 +20000M
	I0923 17:19:16.713198    4437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:19:16.713215    4437 main.go:141] libmachine: STDERR: 
	I0923 17:19:16.713237    4437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:16.713243    4437 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:19:16.713260    4437 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:19:16.713285    4437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:2a:7f:a1:65:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:16.715021    4437 main.go:141] libmachine: STDOUT: 
	I0923 17:19:16.715037    4437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:19:16.715056    4437 client.go:171] duration metric: took 278.324958ms to LocalClient.Create
	I0923 17:19:18.717268    4437 start.go:128] duration metric: took 2.299740042s to createHost
	I0923 17:19:18.717366    4437 start.go:83] releasing machines lock for "kubernetes-upgrade-953000", held for 2.299897875s
	W0923 17:19:18.717450    4437 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:19:18.728382    4437 out.go:177] * Deleting "kubernetes-upgrade-953000" in qemu2 ...
	W0923 17:19:18.772096    4437 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:19:18.772121    4437 start.go:729] Will try again in 5 seconds ...
	I0923 17:19:23.774154    4437 start.go:360] acquireMachinesLock for kubernetes-upgrade-953000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:19:23.774354    4437 start.go:364] duration metric: took 176.375µs to acquireMachinesLock for "kubernetes-upgrade-953000"
	I0923 17:19:23.774399    4437 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:19:23.774445    4437 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:19:23.784658    4437 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:19:23.800169    4437 start.go:159] libmachine.API.Create for "kubernetes-upgrade-953000" (driver="qemu2")
	I0923 17:19:23.800197    4437 client.go:168] LocalClient.Create starting
	I0923 17:19:23.800269    4437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:19:23.800308    4437 main.go:141] libmachine: Decoding PEM data...
	I0923 17:19:23.800316    4437 main.go:141] libmachine: Parsing certificate...
	I0923 17:19:23.800346    4437 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:19:23.800372    4437 main.go:141] libmachine: Decoding PEM data...
	I0923 17:19:23.800377    4437 main.go:141] libmachine: Parsing certificate...
	I0923 17:19:23.800761    4437 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:19:23.963735    4437 main.go:141] libmachine: Creating SSH key...
	I0923 17:19:24.012429    4437 main.go:141] libmachine: Creating Disk image...
	I0923 17:19:24.012438    4437 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:19:24.012664    4437 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:24.021912    4437 main.go:141] libmachine: STDOUT: 
	I0923 17:19:24.021932    4437 main.go:141] libmachine: STDERR: 
	I0923 17:19:24.021992    4437 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2 +20000M
	I0923 17:19:24.029781    4437 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:19:24.029802    4437 main.go:141] libmachine: STDERR: 
	I0923 17:19:24.029815    4437 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:24.029822    4437 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:19:24.029832    4437 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:19:24.029867    4437 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:58:05:55:7b:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:24.031535    4437 main.go:141] libmachine: STDOUT: 
	I0923 17:19:24.031547    4437 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:19:24.031563    4437 client.go:171] duration metric: took 231.364209ms to LocalClient.Create
	I0923 17:19:26.033921    4437 start.go:128] duration metric: took 2.259460167s to createHost
	I0923 17:19:26.034010    4437 start.go:83] releasing machines lock for "kubernetes-upgrade-953000", held for 2.259652083s
	W0923 17:19:26.034299    4437 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:19:26.050815    4437 out.go:201] 
	W0923 17:19:26.054130    4437 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:19:26.054158    4437 out.go:270] * 
	* 
	W0923 17:19:26.056894    4437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:19:26.069996    4437 out.go:201] 

** /stderr **
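Note: every start attempt above dies at the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the /var/run/socket_vmnet Unix socket and hands the connection to QEMU as the file descriptor behind -netdev socket,id=net0,fd=3. With nothing listening on that socket, the client exits with "Connection refused" before QEMU ever boots the VM. A minimal reproduction outside the harness, with the same paths as the log (this assumes socket_vmnet_client's usual `<socket> <command...>` invocation, as seen in the executing lines above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true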
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-953000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-953000: (2.9105855s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-953000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-953000 status --format={{.Host}}: exit status 7 (56.304542ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.186245958s)

-- stdout --
	* [kubernetes-upgrade-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-953000" primary control-plane node in "kubernetes-upgrade-953000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:19:29.083606    4475 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:19:29.083752    4475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:19:29.083756    4475 out.go:358] Setting ErrFile to fd 2...
	I0923 17:19:29.083758    4475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:19:29.083900    4475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:19:29.084940    4475 out.go:352] Setting JSON to false
	I0923 17:19:29.101342    4475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2932,"bootTime":1727134237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:19:29.101454    4475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:19:29.106980    4475 out.go:177] * [kubernetes-upgrade-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:19:29.113925    4475 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:19:29.113952    4475 notify.go:220] Checking for updates...
	I0923 17:19:29.120893    4475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:19:29.123901    4475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:19:29.126925    4475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:19:29.129766    4475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:19:29.132901    4475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:19:29.136162    4475 config.go:182] Loaded profile config "kubernetes-upgrade-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 17:19:29.136402    4475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:19:29.139882    4475 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:19:29.146902    4475 start.go:297] selected driver: qemu2
	I0923 17:19:29.146908    4475 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:19:29.146952    4475 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:19:29.149062    4475 cni.go:84] Creating CNI manager for ""
	I0923 17:19:29.149095    4475 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:19:29.149115    4475 start.go:340] cluster config:
	{Name:kubernetes-upgrade-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:19:29.152296    4475 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:19:29.159836    4475 out.go:177] * Starting "kubernetes-upgrade-953000" primary control-plane node in "kubernetes-upgrade-953000" cluster
	I0923 17:19:29.163917    4475 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:19:29.163942    4475 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:19:29.163948    4475 cache.go:56] Caching tarball of preloaded images
	I0923 17:19:29.164000    4475 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:19:29.164006    4475 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:19:29.164054    4475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kubernetes-upgrade-953000/config.json ...
	I0923 17:19:29.164551    4475 start.go:360] acquireMachinesLock for kubernetes-upgrade-953000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:19:29.164577    4475 start.go:364] duration metric: took 19.5µs to acquireMachinesLock for "kubernetes-upgrade-953000"
	I0923 17:19:29.164585    4475 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:19:29.164590    4475 fix.go:54] fixHost starting: 
	I0923 17:19:29.164708    4475 fix.go:112] recreateIfNeeded on kubernetes-upgrade-953000: state=Stopped err=<nil>
	W0923 17:19:29.164715    4475 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:19:29.172000    4475 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-953000" ...
	I0923 17:19:29.175926    4475 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:19:29.175969    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:58:05:55:7b:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:29.177710    4475 main.go:141] libmachine: STDOUT: 
	I0923 17:19:29.177726    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:19:29.177751    4475 fix.go:56] duration metric: took 13.160916ms for fixHost
	I0923 17:19:29.177755    4475 start.go:83] releasing machines lock for "kubernetes-upgrade-953000", held for 13.175041ms
	W0923 17:19:29.177761    4475 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:19:29.177796    4475 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:19:29.177800    4475 start.go:729] Will try again in 5 seconds ...
	I0923 17:19:34.180049    4475 start.go:360] acquireMachinesLock for kubernetes-upgrade-953000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:19:34.180453    4475 start.go:364] duration metric: took 320.584µs to acquireMachinesLock for "kubernetes-upgrade-953000"
	I0923 17:19:34.180533    4475 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:19:34.180553    4475 fix.go:54] fixHost starting: 
	I0923 17:19:34.181250    4475 fix.go:112] recreateIfNeeded on kubernetes-upgrade-953000: state=Stopped err=<nil>
	W0923 17:19:34.181277    4475 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:19:34.189672    4475 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-953000" ...
	I0923 17:19:34.193628    4475 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:19:34.193877    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:58:05:55:7b:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubernetes-upgrade-953000/disk.qcow2
	I0923 17:19:34.203668    4475 main.go:141] libmachine: STDOUT: 
	I0923 17:19:34.203726    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:19:34.203809    4475 fix.go:56] duration metric: took 23.257166ms for fixHost
	I0923 17:19:34.203832    4475 start.go:83] releasing machines lock for "kubernetes-upgrade-953000", held for 23.355625ms
	W0923 17:19:34.204023    4475 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:19:34.212603    4475 out.go:201] 
	W0923 17:19:34.215612    4475 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:19:34.215646    4475 out.go:270] * 
	* 
	W0923 17:19:34.217743    4475 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:19:34.227547    4475 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-953000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-953000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-953000 version --output=json: exit status 1 (64.974041ms)

** stderr ** 
	error: context "kubernetes-upgrade-953000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-23 17:19:34.307348 -0700 PDT m=+2568.274109210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-953000 -n kubernetes-upgrade-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-953000 -n kubernetes-upgrade-953000: exit status 7 (33.873541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-953000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-953000
--- FAIL: TestKubernetesUpgrade (18.12s)
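Note: the failure above is not a Kubernetes-version problem; the qemu2 driver could not reach the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"), so the VM never came up. A minimal pre-flight sketch of that reachability check, assuming the socket path shown in the log and using only Go's standard library (illustrative, not minikube code):

// preflight.go: probe the socket_vmnet unix socket before launching qemu2 VMs.
// The socket path comes from the failure above; the 2s timeout is an assumption.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Same failure class as the GUEST_PROVISION error in the log above.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}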

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19696
- KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3254015293/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)
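Note: DRV_UNSUPPORTED_OS here is expected behavior, since hyperkit is an Intel-only hypervisor and this host is darwin/arm64; the subtest fails because it asserts a successful run rather than skipping. A hedged sketch of an arch guard using only the standard library (the test name is hypothetical, not the actual driver_install_or_update_test.go code):

package driver_test

import (
	"runtime"
	"testing"
)

// TestHyperkitOnly illustrates skipping, instead of failing, when the
// hyperkit driver cannot exist on the current platform.
func TestHyperkitOnly(t *testing.T) {
	if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
		t.Skip("hyperkit driver is not supported on darwin/arm64")
	}
	// ... exercise the hyperkit driver here ...
}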

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.2s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
E0923 17:15:39.284943    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19696
- KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current891106938/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.20s)
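Note: the v1.2.0 variant exits the same way (status 56, DRV_UNSUPPORTED_OS); see the skip sketch after the previous subtest. For illustration, a sketch of the kind of OS/arch gate that produces this error; the support table below is invented for the example, not minikube's actual driver registry:

package main

import (
	"fmt"
	"runtime"
)

// supportedOn reports whether a driver can run on the given OS/arch.
// The entries are illustrative assumptions only.
func supportedOn(driver, goos, goarch string) bool {
	switch driver {
	case "hyperkit":
		return goos == "darwin" && goarch == "amd64"
	case "qemu2":
		return goos == "darwin" || goos == "linux"
	}
	return false
}

func main() {
	for _, d := range []string{"hyperkit", "qemu2"} {
		fmt.Printf("%s on %s/%s: %v\n",
			d, runtime.GOOS, runtime.GOARCH, supportedOn(d, runtime.GOOS, runtime.GOARCH))
	}
}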

TestStoppedBinaryUpgrade/Upgrade (573.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.526200020 start -p stopped-upgrade-180000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.526200020 start -p stopped-upgrade-180000 --memory=2200 --vm-driver=qemu2 : (39.429934333s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.526200020 -p stopped-upgrade-180000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.526200020 -p stopped-upgrade-180000 stop: (12.09055725s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0923 17:20:32.845492    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:20:39.349083    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:25:32.843228    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/functional-496000/client.crt: no such file or directory" logger="UnhandledError"
E0923 17:25:39.347077    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.773542042s)

-- stdout --
	* [stopped-upgrade-180000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0923 17:20:26.937184    4508 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:20:26.937326    4508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:20:26.937330    4508 out.go:358] Setting ErrFile to fd 2...
	I0923 17:20:26.937332    4508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:20:26.937495    4508 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:20:26.938705    4508 out.go:352] Setting JSON to false
	I0923 17:20:26.958502    4508 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2989,"bootTime":1727134237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:20:26.958581    4508 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:20:26.963511    4508 out.go:177] * [stopped-upgrade-180000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:20:26.971430    4508 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:20:26.971514    4508 notify.go:220] Checking for updates...
	I0923 17:20:26.977439    4508 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:20:26.980492    4508 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:20:26.981612    4508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:20:26.984458    4508 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:20:26.987446    4508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:20:26.990729    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:20:26.993389    4508 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 17:20:26.996418    4508 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:20:27.000394    4508 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:20:27.007411    4508 start.go:297] selected driver: qemu2
	I0923 17:20:27.007417    4508 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:27.007460    4508 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:20:27.010002    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:20:27.010039    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:20:27.010059    4508 start.go:340] cluster config:
	{Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:27.010109    4508 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:20:27.017444    4508 out.go:177] * Starting "stopped-upgrade-180000" primary control-plane node in "stopped-upgrade-180000" cluster
	I0923 17:20:27.021466    4508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:20:27.021480    4508 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 17:20:27.021488    4508 cache.go:56] Caching tarball of preloaded images
	I0923 17:20:27.021534    4508 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:20:27.021540    4508 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 17:20:27.021588    4508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0923 17:20:27.021962    4508 start.go:360] acquireMachinesLock for stopped-upgrade-180000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:20:27.021990    4508 start.go:364] duration metric: took 22.709µs to acquireMachinesLock for "stopped-upgrade-180000"
	I0923 17:20:27.022000    4508 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:20:27.022004    4508 fix.go:54] fixHost starting: 
	I0923 17:20:27.022114    4508 fix.go:112] recreateIfNeeded on stopped-upgrade-180000: state=Stopped err=<nil>
	W0923 17:20:27.022123    4508 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:20:27.030437    4508 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-180000" ...
	I0923 17:20:27.033309    4508 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:20:27.033380    4508 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50494-:22,hostfwd=tcp::50495-:2376,hostname=stopped-upgrade-180000 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/disk.qcow2
	I0923 17:20:27.077996    4508 main.go:141] libmachine: STDOUT: 
	I0923 17:20:27.078022    4508 main.go:141] libmachine: STDERR: 
	I0923 17:20:27.078028    4508 main.go:141] libmachine: Waiting for VM to start (ssh -p 50494 docker@127.0.0.1)...
	I0923 17:20:47.137261    4508 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/config.json ...
	I0923 17:20:47.138461    4508 machine.go:93] provisionDockerMachine start ...
	I0923 17:20:47.138610    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.139057    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.139074    4508 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 17:20:47.228457    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 17:20:47.228492    4508 buildroot.go:166] provisioning hostname "stopped-upgrade-180000"
	I0923 17:20:47.228624    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.228871    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.228883    4508 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-180000 && echo "stopped-upgrade-180000" | sudo tee /etc/hostname
	I0923 17:20:47.310966    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-180000
	
	I0923 17:20:47.311062    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.311233    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.311246    4508 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-180000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-180000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-180000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 17:20:47.385402    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 17:20:47.385415    4508 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19696-1109/.minikube CaCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19696-1109/.minikube}
	I0923 17:20:47.385423    4508 buildroot.go:174] setting up certificates
	I0923 17:20:47.385429    4508 provision.go:84] configureAuth start
	I0923 17:20:47.385433    4508 provision.go:143] copyHostCerts
	I0923 17:20:47.385521    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem, removing ...
	I0923 17:20:47.385530    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem
	I0923 17:20:47.385771    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.pem (1082 bytes)
	I0923 17:20:47.385988    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem, removing ...
	I0923 17:20:47.385993    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem
	I0923 17:20:47.386064    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/cert.pem (1123 bytes)
	I0923 17:20:47.386196    4508 exec_runner.go:144] found /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem, removing ...
	I0923 17:20:47.386200    4508 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem
	I0923 17:20:47.386274    4508 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19696-1109/.minikube/key.pem (1679 bytes)
	I0923 17:20:47.386381    4508 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-180000 san=[127.0.0.1 localhost minikube stopped-upgrade-180000]
	I0923 17:20:47.480886    4508 provision.go:177] copyRemoteCerts
	I0923 17:20:47.480936    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 17:20:47.480944    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:47.516374    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 17:20:47.523265    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 17:20:47.529936    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 17:20:47.537081    4508 provision.go:87] duration metric: took 151.642667ms to configureAuth
	I0923 17:20:47.537090    4508 buildroot.go:189] setting minikube options for container-runtime
	I0923 17:20:47.537189    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:20:47.537233    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.537316    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.537323    4508 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 17:20:47.603984    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 17:20:47.603995    4508 buildroot.go:70] root file system type: tmpfs
	I0923 17:20:47.604057    4508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 17:20:47.604121    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.604236    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.604271    4508 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 17:20:47.675030    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 17:20:47.675100    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:47.675224    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:47.675232    4508 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 17:20:48.052414    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 17:20:48.052428    4508 machine.go:96] duration metric: took 913.956333ms to provisionDockerMachine
	I0923 17:20:48.052434    4508 start.go:293] postStartSetup for "stopped-upgrade-180000" (driver="qemu2")
	I0923 17:20:48.052441    4508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 17:20:48.052505    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 17:20:48.052514    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:48.088607    4508 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 17:20:48.089797    4508 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 17:20:48.089805    4508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/addons for local assets ...
	I0923 17:20:48.089892    4508 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19696-1109/.minikube/files for local assets ...
	I0923 17:20:48.090019    4508 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem -> 15962.pem in /etc/ssl/certs
	I0923 17:20:48.090160    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 17:20:48.092955    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:20:48.100034    4508 start.go:296] duration metric: took 47.595417ms for postStartSetup
	I0923 17:20:48.100049    4508 fix.go:56] duration metric: took 21.078193666s for fixHost
	I0923 17:20:48.100085    4508 main.go:141] libmachine: Using SSH client type: native
	I0923 17:20:48.100186    4508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1012a1c00] 0x1012a4440 <nil>  [] 0s} localhost 50494 <nil> <nil>}
	I0923 17:20:48.100191    4508 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 17:20:48.166813    4508 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727137248.321797546
	
	I0923 17:20:48.166824    4508 fix.go:216] guest clock: 1727137248.321797546
	I0923 17:20:48.166828    4508 fix.go:229] Guest: 2024-09-23 17:20:48.321797546 -0700 PDT Remote: 2024-09-23 17:20:48.100051 -0700 PDT m=+21.184851918 (delta=221.746546ms)
	I0923 17:20:48.166841    4508 fix.go:200] guest clock delta is within tolerance: 221.746546ms
	I0923 17:20:48.166844    4508 start.go:83] releasing machines lock for "stopped-upgrade-180000", held for 21.144998041s
	I0923 17:20:48.166918    4508 ssh_runner.go:195] Run: cat /version.json
	I0923 17:20:48.166928    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:20:48.166918    4508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 17:20:48.166955    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	W0923 17:20:48.167530    4508 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50494: connect: connection refused
	I0923 17:20:48.167555    4508 retry.go:31] will retry after 373.081313ms: dial tcp [::1]:50494: connect: connection refused
	W0923 17:20:48.201794    4508 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 17:20:48.201845    4508 ssh_runner.go:195] Run: systemctl --version
	I0923 17:20:48.203885    4508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 17:20:48.205532    4508 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 17:20:48.205561    4508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 17:20:48.208830    4508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 17:20:48.213653    4508 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 17:20:48.213667    4508 start.go:495] detecting cgroup driver to use...
	I0923 17:20:48.213756    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:20:48.220887    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 17:20:48.224111    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 17:20:48.227074    4508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 17:20:48.227105    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 17:20:48.229876    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:20:48.232953    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 17:20:48.236389    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 17:20:48.239333    4508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 17:20:48.242341    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 17:20:48.245525    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 17:20:48.248910    4508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 17:20:48.252027    4508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 17:20:48.254575    4508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 17:20:48.257420    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:48.335540    4508 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 17:20:48.346128    4508 start.go:495] detecting cgroup driver to use...
	I0923 17:20:48.346225    4508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 17:20:48.352309    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:20:48.357310    4508 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 17:20:48.364870    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 17:20:48.369690    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 17:20:48.374385    4508 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 17:20:48.424891    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 17:20:48.430387    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 17:20:48.436413    4508 ssh_runner.go:195] Run: which cri-dockerd
	I0923 17:20:48.437688    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 17:20:48.440478    4508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 17:20:48.445618    4508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 17:20:48.527396    4508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 17:20:48.607210    4508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 17:20:48.607264    4508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 17:20:48.612237    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:48.681883    4508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:20:49.797614    4508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.115721292s)
	I0923 17:20:49.797680    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 17:20:49.802246    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:20:49.807085    4508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 17:20:49.875446    4508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 17:20:49.957519    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:50.027199    4508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 17:20:50.033160    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 17:20:50.037793    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:50.115351    4508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 17:20:50.153860    4508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 17:20:50.153968    4508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 17:20:50.156087    4508 start.go:563] Will wait 60s for crictl version
	I0923 17:20:50.156144    4508 ssh_runner.go:195] Run: which crictl
	I0923 17:20:50.157487    4508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 17:20:50.172490    4508 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 17:20:50.172583    4508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 17:20:50.188396    4508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 17:20:50.209615    4508 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 17:20:50.209701    4508 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 17:20:50.211088    4508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 17:20:50.214566    4508 kubeadm.go:883] updating cluster {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 17:20:50.214616    4508 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 17:20:50.214667    4508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:20:50.224870    4508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:20:50.224879    4508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:20:50.224928    4508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:20:50.228668    4508 ssh_runner.go:195] Run: which lz4
	I0923 17:20:50.229975    4508 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 17:20:50.231370    4508 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 17:20:50.231378    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 17:20:51.209091    4508 docker.go:649] duration metric: took 979.165708ms to copy over tarball
	I0923 17:20:51.209168    4508 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 17:20:52.373462    4508 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164286791s)
	I0923 17:20:52.373476    4508 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 17:20:52.389548    4508 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 17:20:52.392983    4508 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 17:20:52.398104    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:52.480651    4508 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 17:20:54.105747    4508 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.625087916s)
	I0923 17:20:54.105858    4508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 17:20:54.125262    4508 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 17:20:54.125272    4508 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 17:20:54.125277    4508 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 17:20:54.130599    4508 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.132655    4508 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:54.134140    4508 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.134202    4508 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 17:20:54.136274    4508 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:54.136320    4508 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.137867    4508 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.138311    4508 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 17:20:54.139359    4508 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.139886    4508 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.141320    4508 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.141497    4508 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.142948    4508 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.143090    4508 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.144136    4508 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.144821    4508 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.470043    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.480186    4508 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 17:20:54.480218    4508 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.480283    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 17:20:54.490378    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 17:20:54.500931    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 17:20:54.510816    4508 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 17:20:54.510846    4508 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 17:20:54.510914    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 17:20:54.520967    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 17:20:54.522471    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 17:20:54.524086    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 17:20:54.524098    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 17:20:54.532376    4508 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 17:20:54.532384    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 17:20:54.547515    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0923 17:20:54.562422    4508 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 17:20:54.562579    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.563329    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 17:20:54.563365    4508 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 17:20:54.563381    4508 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.563417    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 17:20:54.576632    4508 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 17:20:54.576651    4508 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.576698    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 17:20:54.576718    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 17:20:54.576803    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:20:54.582788    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.591833    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0923 17:20:54.591841    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 17:20:54.591868    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0923 17:20:54.592007    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:20:54.600393    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 17:20:54.600422    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 17:20:54.600541    4508 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 17:20:54.600560    4508 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.600612    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 17:20:54.629315    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 17:20:54.640896    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.641862    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.675884    4508 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 17:20:54.675909    4508 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.675982    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 17:20:54.691590    4508 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 17:20:54.691604    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 17:20:54.707439    4508 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 17:20:54.707465    4508 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.707541    4508 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 17:20:54.738037    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 17:20:54.838198    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 17:20:54.838208    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0923 17:20:54.917345    4508 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0923 17:20:54.917361    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0923 17:20:54.993572    4508 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 17:20:54.993717    4508 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.089961    4508 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 17:20:55.089989    4508 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.090043    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0923 17:20:55.090064    4508 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:20:55.103955    4508 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 17:20:55.104097    4508 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:20:55.105447    4508 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 17:20:55.105458    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 17:20:55.133466    4508 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 17:20:55.133480    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 17:20:55.377767    4508 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 17:20:55.377811    4508 cache_images.go:92] duration metric: took 1.252525833s to LoadCachedImages
	W0923 17:20:55.377852    4508 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
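
The cache-load sequence above follows one pattern per image: stat the tarball on the node, transfer it when the stat fails, then pipe it into the runtime with "sudo cat ... | docker load". A minimal Go sketch of that three-step flow, assuming a hypothetical runCmd/loadCachedImage pair and a local cp standing in for the scp step (this is not minikube's ssh_runner code):

package main

import (
	"fmt"
	"os/exec"
)

// runCmd runs a command and folds its output into the error, roughly
// what the ssh_runner lines above do on the guest.
func runCmd(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

// loadCachedImage mirrors the three steps in the log: existence check,
// transfer on miss, then docker load from the transferred tarball.
func loadCachedImage(local, remote string) error {
	if err := runCmd("stat", "-c", "%s %y", remote); err != nil {
		// stat exited non-zero => tarball missing; transfer it
		// (scp over SSH in the real flow, plain cp here).
		if err := runCmd("cp", local, remote); err != nil {
			return err
		}
	}
	return runCmd("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", remote))
}

func main() {
	_ = loadCachedImage(
		"/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0",
		"/var/lib/minikube/images/etcd_3.5.3-0")
}
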
	I0923 17:20:55.377857    4508 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 17:20:55.377898    4508 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-180000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 17:20:55.377973    4508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 17:20:55.391992    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:20:55.392003    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:20:55.392008    4508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 17:20:55.392017    4508 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-180000 NodeName:stopped-upgrade-180000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 17:20:55.392084    4508 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-180000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
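The kubeadm config dumped above is one file holding four YAML documents split by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch of reading such a multi-document file with gopkg.in/yaml.v3 (TypeMeta here is a local stand-in, not the upstream type; illustrative only):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// TypeMeta is a local stand-in that captures just the two fields every
// document in the dump carries.
type TypeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.v3's Decoder yields one document per Decode call until io.EOF.
	dec := yaml.NewDecoder(f)
	for {
		var doc TypeMeta
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
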
	I0923 17:20:55.392150    4508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 17:20:55.395014    4508 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 17:20:55.395050    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 17:20:55.397568    4508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 17:20:55.402645    4508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 17:20:55.407348    4508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 17:20:55.412446    4508 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 17:20:55.413560    4508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
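
The /etc/hosts step above is an idempotent rewrite: grep first checks whether the control-plane entry is present, and the bash one-liner then drops any stale line, appends the current mapping, and copies the result back with sudo (a plain shell redirection would not run as root). A sketch under those assumptions, with ensureHostEntry as a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
)

// ensureHostEntry rebuilds /etc/hosts without any old line for host,
// appends the current ip<TAB>host mapping, and installs it via sudo cp.
func ensureHostEntry(ip, host string) error {
	script := fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		host, ip, host)
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		return fmt.Errorf("hosts update failed: %w: %s", err, out)
	}
	return nil
}

func main() {
	_ = ensureHostEntry("10.0.2.15", "control-plane.minikube.internal")
}
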
	I0923 17:20:55.417113    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:20:55.499150    4508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:20:55.505213    4508 certs.go:68] Setting up /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000 for IP: 10.0.2.15
	I0923 17:20:55.505222    4508 certs.go:194] generating shared ca certs ...
	I0923 17:20:55.505231    4508 certs.go:226] acquiring lock for ca certs: {Name:mk0bd8a887d4e289277fd6cf7c9ed1b474966431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.505405    4508 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key
	I0923 17:20:55.505464    4508 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key
	I0923 17:20:55.505470    4508 certs.go:256] generating profile certs ...
	I0923 17:20:55.505546    4508 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key
	I0923 17:20:55.505562    4508 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156
	I0923 17:20:55.505573    4508 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 17:20:55.625317    4508 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 ...
	I0923 17:20:55.625331    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156: {Name:mk018920694709d8ee675a242cd091f45c8350f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.633285    4508 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 ...
	I0923 17:20:55.633290    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156: {Name:mk85fedbb527994c11d5c54319fe082e5f6febf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.633449    4508 certs.go:381] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt.11eb3156 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt
	I0923 17:20:55.634860    4508 certs.go:385] copying /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key.11eb3156 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key
	I0923 17:20:55.635052    4508 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.key
	I0923 17:20:55.635191    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem (1338 bytes)
	W0923 17:20:55.635223    4508 certs.go:480] ignoring /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596_empty.pem, impossibly tiny 0 bytes
	I0923 17:20:55.635230    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 17:20:55.635253    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem (1082 bytes)
	I0923 17:20:55.635275    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem (1123 bytes)
	I0923 17:20:55.635294    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/key.pem (1679 bytes)
	I0923 17:20:55.635332    4508 certs.go:484] found cert: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem (1708 bytes)
	I0923 17:20:55.635685    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 17:20:55.642518    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 17:20:55.649461    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 17:20:55.656980    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 17:20:55.664459    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 17:20:55.671622    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 17:20:55.678460    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 17:20:55.685406    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 17:20:55.692885    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 17:20:55.699664    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/1596.pem --> /usr/share/ca-certificates/1596.pem (1338 bytes)
	I0923 17:20:55.706360    4508 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/ssl/certs/15962.pem --> /usr/share/ca-certificates/15962.pem (1708 bytes)
	I0923 17:20:55.713195    4508 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
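
The certs block above reuses the existing minikubeCA and only regenerates the apiserver serving cert, signing it with SANs for the cluster service IP (10.96.0.1), loopback, and the node IP (10.0.2.15); the resulting files are then copied under /var/lib/minikube/certs. A self-contained Go sketch of issuing such an IP-SAN certificate (a throwaway CA replaces the real ca.key; illustrative only):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA key pair is loaded from .minikube/ca.{crt,key};
	// a throwaway CA keeps this sketch self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
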
	I0923 17:20:55.718378    4508 ssh_runner.go:195] Run: openssl version
	I0923 17:20:55.720205    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1596.pem && ln -fs /usr/share/ca-certificates/1596.pem /etc/ssl/certs/1596.pem"
	I0923 17:20:55.723190    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.724638    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 23:53 /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.724661    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1596.pem
	I0923 17:20:55.726454    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1596.pem /etc/ssl/certs/51391683.0"
	I0923 17:20:55.729805    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15962.pem && ln -fs /usr/share/ca-certificates/15962.pem /etc/ssl/certs/15962.pem"
	I0923 17:20:55.733138    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.734659    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 23:53 /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.734687    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15962.pem
	I0923 17:20:55.736369    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15962.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 17:20:55.739115    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 17:20:55.741907    4508 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.743286    4508 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.743312    4508 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 17:20:55.745067    4508 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 17:20:55.748217    4508 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 17:20:55.749635    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 17:20:55.751559    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 17:20:55.753393    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 17:20:55.755398    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 17:20:55.757171    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 17:20:55.759214    4508 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
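
The openssl/ln pairs above wire the copied PEMs into OpenSSL's hash-based trust lookup: a certificate is found via a symlink named after its subject hash ("<hash>.0") in /etc/ssl/certs, which is why each install is followed by "openssl x509 -hash -noout" and an ln -fs. A Go sketch of that convention (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl x509 -hash -noout -in <pem>  =>  e.g. "b5213941"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// ln -fs <pem> /etc/ssl/certs/<hash>.0
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
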
	I0923 17:20:55.761105    4508 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-180000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50528 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 17:20:55.761186    4508 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:20:55.771717    4508 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 17:20:55.774671    4508 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 17:20:55.774683    4508 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 17:20:55.774710    4508 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 17:20:55.777507    4508 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 17:20:55.777825    4508 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-180000" does not appear in /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:20:55.777920    4508 kubeconfig.go:62] /Users/jenkins/minikube-integration/19696-1109/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-180000" cluster setting kubeconfig missing "stopped-upgrade-180000" context setting]
	I0923 17:20:55.778131    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:20:55.778834    4508 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10287a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:20:55.779188    4508 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 17:20:55.781855    4508 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-180000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0923 17:20:55.781861    4508 kubeadm.go:1160] stopping kube-system containers ...
	I0923 17:20:55.781909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 17:20:55.792510    4508 docker.go:483] Stopping containers: [d197e6aae6df d90f22288f74 f23fdf4a3c0e d3412f726c41 bef04daa8846 c5580dec55db c76c65ec3945 888ebeffd7fc]
	I0923 17:20:55.792591    4508 ssh_runner.go:195] Run: docker stop d197e6aae6df d90f22288f74 f23fdf4a3c0e d3412f726c41 bef04daa8846 c5580dec55db c76c65ec3945 888ebeffd7fc
	I0923 17:20:55.803290    4508 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 17:20:55.808668    4508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:20:55.811607    4508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:20:55.811613    4508 kubeadm.go:157] found existing configuration files:
	
	I0923 17:20:55.811638    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0923 17:20:55.814062    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:20:55.814089    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:20:55.817055    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0923 17:20:55.820076    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:20:55.820101    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:20:55.822588    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0923 17:20:55.825211    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:20:55.825239    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:20:55.828338    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0923 17:20:55.831074    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:20:55.831115    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 17:20:55.833815    4508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:20:55.837320    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:55.859490    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.344915    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.481378    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 17:20:56.503403    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
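
Instead of a full "kubeadm init", the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, each against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of the same sequence (hypothetical wrapper; phase names and paths are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order they appear in the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("phase %v failed: %v\n%s", p, err, out))
		}
	}
}
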
	I0923 17:20:56.526066    4508 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:20:56.526144    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:57.028292    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:57.528242    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:20:57.536398    4508 api_server.go:72] duration metric: took 1.010333083s to wait for apiserver process to appear ...
	I0923 17:20:57.536412    4508 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:20:57.536422    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:02.538504    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:02.538530    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:07.538760    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:07.538824    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:12.539264    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:12.539289    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:17.539754    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:17.539800    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:22.540453    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:22.540480    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:27.541538    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:27.541593    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:32.542824    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:32.542880    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:37.544364    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:37.544407    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:42.546335    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:42.546373    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:47.548602    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:47.548656    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:52.551023    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:21:52.551065    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:21:57.553294    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
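
From here the log settles into a wait loop: each probe of https://10.0.2.15:8443/healthz fails with "Client.Timeout exceeded" after about five seconds and is retried immediately, which implies a short per-request client timeout rather than one shared deadline. A sketch of that polling pattern (the 5s per-probe timeout is inferred from the log cadence; waitForHealthz is a hypothetical name):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout (inferred from the log)
		Transport: &http.Transport{
			// The sketch skips cert verification; the real checker
			// trusts the minikubeCA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// On failure, fall through and retry, as in the
		// "stopped: ... context deadline exceeded" lines above.
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz",
		time.Now().Add(4*time.Minute))
	fmt.Println(err)
}
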
	I0923 17:21:57.553482    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:21:57.564291    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:21:57.564379    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:21:57.574428    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:21:57.574517    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:21:57.585379    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:21:57.585483    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:21:57.595679    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:21:57.595768    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:21:57.606052    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:21:57.606131    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:21:57.616758    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:21:57.616841    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:21:57.631288    4508 logs.go:276] 0 containers: []
	W0923 17:21:57.631301    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:21:57.631375    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:21:57.641703    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:21:57.641724    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:21:57.641729    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:21:57.716636    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:21:57.716648    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:21:57.731237    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:21:57.731255    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:21:57.773418    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:21:57.773429    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:21:57.800392    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:21:57.800407    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:21:57.804700    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:21:57.804710    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:21:57.819575    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:21:57.819584    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:21:57.830997    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:21:57.831012    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:21:57.842529    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:21:57.842542    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:21:57.859968    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:21:57.859983    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:21:57.899651    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:21:57.899663    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:21:57.911331    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:21:57.911341    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:21:57.925089    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:21:57.925102    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:21:57.940538    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:21:57.940554    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:21:57.953537    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:21:57.953551    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:21:57.965651    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:21:57.965661    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:21:57.984413    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:21:57.984437    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
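
Each diagnostics pass that follows a failed healthz wait fans out the same way: list containers per component via a docker ps name filter, then tail the last 400 lines of each match (the same cycle repeats below with fresh timestamps). A sketch of that loop with illustrative helper names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(component string) []string {
	out, _ := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, out)
		}
	}
}
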
	I0923 17:22:00.498836    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:05.501066    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:05.501213    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:05.513188    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:05.513283    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:05.524073    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:05.524171    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:05.534695    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:05.534779    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:05.546710    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:05.546794    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:05.557318    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:05.557394    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:05.568160    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:05.568248    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:05.580769    4508 logs.go:276] 0 containers: []
	W0923 17:22:05.580781    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:05.580856    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:05.591303    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:05.591321    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:05.591326    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:05.628374    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:05.628382    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:05.642205    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:05.642215    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:05.653688    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:05.653701    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:05.664443    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:05.664457    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:05.678252    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:05.678263    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:05.689643    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:05.689653    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:05.706616    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:05.706630    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:05.724416    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:05.724425    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:05.736610    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:05.736620    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:05.748144    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:05.748155    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:05.783976    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:05.783992    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:05.799623    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:05.799633    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:05.811831    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:05.811842    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:05.824245    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:05.824257    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:05.828594    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:05.828601    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:05.866592    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:05.866602    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:08.394566    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:13.396074    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:13.396364    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:13.417886    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:13.418005    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:13.432253    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:13.432353    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:13.445274    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:13.445355    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:13.456435    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:13.456542    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:13.466916    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:13.466993    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:13.477866    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:13.477951    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:13.487614    4508 logs.go:276] 0 containers: []
	W0923 17:22:13.487636    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:13.487709    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:13.498275    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:13.498296    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:13.498302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:13.516040    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:13.516055    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:13.531007    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:13.531021    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:13.542411    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:13.542423    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:13.553594    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:13.553606    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:13.566008    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:13.566023    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:13.604863    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:13.604877    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:13.608954    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:13.608961    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:13.622560    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:13.622575    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:13.634519    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:13.634533    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:13.648923    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:13.648932    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:13.661641    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:13.661656    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:13.700095    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:13.700104    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:13.738290    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:13.738306    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:13.750547    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:13.750558    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:13.771399    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:13.771413    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:13.783351    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:13.783361    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:16.309030    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:21.310538    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:21.310713    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:21.322030    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:21.322114    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:21.333146    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:21.333221    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:21.344707    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:21.344813    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:21.355184    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:21.355271    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:21.365973    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:21.366063    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:21.376259    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:21.376333    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:21.386699    4508 logs.go:276] 0 containers: []
	W0923 17:22:21.386712    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:21.386776    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:21.397562    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:21.397580    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:21.397586    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:21.416997    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:21.417007    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:21.428428    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:21.428440    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:21.453729    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:21.453739    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:21.491792    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:21.491805    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:21.506223    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:21.506238    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:21.518549    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:21.518562    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:21.530809    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:21.530819    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:21.565786    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:21.565801    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:21.579576    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:21.579587    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:21.590998    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:21.591010    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:21.604809    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:21.604819    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:21.616633    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:21.616650    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:21.631423    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:21.631436    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:21.643586    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:21.643600    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:21.681594    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:21.681608    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:21.685868    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:21.685877    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:24.201893    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:29.204303    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:29.204577    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:29.222955    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:29.223066    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:29.236540    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:29.236629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:29.254887    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:29.254968    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:29.265503    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:29.265592    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:29.275826    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:29.275904    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:29.286987    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:29.287072    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:29.296926    4508 logs.go:276] 0 containers: []
	W0923 17:22:29.296938    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:29.297009    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:29.307551    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:29.307571    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:29.307576    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:29.321895    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:29.321906    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:29.333253    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:29.333302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:29.344712    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:29.344723    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:29.359142    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:29.359153    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:29.397542    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:29.397553    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:29.422282    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:29.422292    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:29.426953    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:29.426959    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:29.442400    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:29.442414    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:29.460847    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:29.460858    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:29.500830    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:29.500840    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:29.515165    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:29.515176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:29.529744    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:29.529755    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:29.541588    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:29.541602    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:29.553292    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:29.553302    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:29.570019    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:29.570029    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:29.582913    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:29.582923    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:32.119739    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:37.122034    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:37.122288    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:37.137003    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:37.137104    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:37.149028    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:37.149118    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:37.159828    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:37.159916    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:37.170670    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:37.170758    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:37.180666    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:37.180751    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:37.190948    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:37.191034    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:37.201251    4508 logs.go:276] 0 containers: []
	W0923 17:22:37.201262    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:37.201339    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:37.211621    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:37.211638    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:37.211643    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:37.250542    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:37.250553    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:37.288675    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:37.288690    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:37.300640    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:37.300650    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:37.312444    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:37.312455    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:37.316534    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:37.316543    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:37.330984    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:37.330994    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:37.342022    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:37.342036    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:37.366814    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:37.366825    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:37.380860    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:37.380871    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:37.418554    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:37.418565    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:37.432817    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:37.432827    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:37.449569    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:37.449585    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:37.463576    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:37.463586    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:37.481164    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:37.481174    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:37.492114    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:37.492130    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:37.504255    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:37.504265    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:40.017581    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:45.020226    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:45.020537    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:45.046296    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:45.046450    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:45.063799    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:45.063909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:45.079588    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:45.079677    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:45.091253    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:45.091342    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:45.101795    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:45.101872    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:45.114186    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:45.114267    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:45.124902    4508 logs.go:276] 0 containers: []
	W0923 17:22:45.124917    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:45.124986    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:45.135198    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:45.135217    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:45.135222    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:45.161204    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:45.161216    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:45.165361    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:45.165371    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:45.176323    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:45.176335    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:45.193596    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:45.193606    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:45.208403    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:45.208417    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:45.246430    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:45.246446    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:45.260476    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:45.260486    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:45.272211    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:45.272228    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:45.284271    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:45.284282    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:45.298616    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:45.298627    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:45.312512    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:45.312527    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:45.327163    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:45.327174    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:45.339951    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:45.339962    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:45.377360    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:45.377371    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:45.414164    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:45.414175    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:45.429042    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:45.429053    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:47.948264    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:22:52.951002    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:22:52.951321    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:22:52.977153    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:22:52.977281    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:22:52.993604    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:22:52.993702    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:22:53.006108    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:22:53.006200    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:22:53.017606    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:22:53.017695    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:22:53.027861    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:22:53.027942    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:22:53.039846    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:22:53.039928    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:22:53.056727    4508 logs.go:276] 0 containers: []
	W0923 17:22:53.056739    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:22:53.056814    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:22:53.066656    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:22:53.066674    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:22:53.066680    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:22:53.104405    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:22:53.104417    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:22:53.116966    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:22:53.116976    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:22:53.128452    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:22:53.128463    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:22:53.151823    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:22:53.151835    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:22:53.188902    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:22:53.188908    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:22:53.192737    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:22:53.192747    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:22:53.226700    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:22:53.226712    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:22:53.242848    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:22:53.242859    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:22:53.259825    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:22:53.259836    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:22:53.277207    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:22:53.277218    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:22:53.290963    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:22:53.290973    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:22:53.303499    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:22:53.303509    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:22:53.318674    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:22:53.318682    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:22:53.331334    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:22:53.331345    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:22:53.349382    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:22:53.349394    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:22:53.362456    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:22:53.362470    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:22:55.877415    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:00.879100    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:00.879292    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:00.893241    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:00.893338    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:00.907557    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:00.907649    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:00.918472    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:00.918548    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:00.930087    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:00.930171    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:00.940453    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:00.940535    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:00.950881    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:00.950958    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:00.965326    4508 logs.go:276] 0 containers: []
	W0923 17:23:00.965341    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:00.965414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:00.976384    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:00.976404    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:00.976409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:00.994998    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:00.995009    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:01.006659    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:01.006672    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:01.021400    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:01.021411    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:01.033295    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:01.033307    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:01.047570    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:01.047585    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:01.072947    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:01.072956    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:01.084619    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:01.084629    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:01.124265    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:01.124276    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:01.139113    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:01.139130    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:01.185970    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:01.185982    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:01.200732    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:01.200749    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:01.205042    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:01.205049    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:01.217344    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:01.217354    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:01.242453    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:01.242468    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:01.281414    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:01.281430    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:01.294185    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:01.294199    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:03.808933    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:08.811117    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:08.811391    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:08.839024    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:08.839141    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:08.853795    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:08.853894    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:08.866255    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:08.866349    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:08.877312    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:08.877399    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:08.887822    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:08.887909    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:08.898328    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:08.898414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:08.908631    4508 logs.go:276] 0 containers: []
	W0923 17:23:08.908645    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:08.908720    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:08.919128    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:08.919151    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:08.919155    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:08.956914    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:08.956923    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:08.971161    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:08.971176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:08.983381    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:08.983395    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:09.008618    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:09.008632    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:09.046783    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:09.046801    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:09.051763    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:09.051776    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:09.066160    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:09.066171    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:09.081176    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:09.081193    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:09.094298    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:09.094312    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:09.108208    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:09.108217    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:09.121273    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:09.121286    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:09.164231    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:09.164246    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:09.177146    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:09.177160    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:09.201091    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:09.201110    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:09.214106    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:09.214119    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:09.227129    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:09.227142    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:11.747964    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:16.750270    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:16.750493    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:16.770955    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:16.771073    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:16.785324    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:16.785419    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:16.797267    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:16.797343    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:16.810172    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:16.810261    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:16.821677    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:16.821764    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:16.834518    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:16.834598    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:16.846253    4508 logs.go:276] 0 containers: []
	W0923 17:23:16.846266    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:16.846341    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:16.857549    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:16.857568    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:16.857573    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:16.875132    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:16.875145    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:16.896356    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:16.896372    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:16.911727    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:16.911742    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:16.953751    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:16.953768    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:16.989827    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:16.989838    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:17.002968    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:17.002976    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:17.015609    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:17.015618    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:17.035491    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:17.035501    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:17.050663    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:17.050678    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:17.065889    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:17.065900    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:17.078934    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:17.078945    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:17.098598    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:17.098609    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:17.114554    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:17.114566    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:17.119686    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:17.119698    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:17.160235    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:17.160250    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:17.185090    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:17.185102    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:19.696622    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:24.697692    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:24.697807    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:24.710601    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:24.710693    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:24.721800    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:24.721892    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:24.733451    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:24.733538    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:24.745650    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:24.745737    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:24.756961    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:24.757052    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:24.768461    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:24.768551    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:24.779496    4508 logs.go:276] 0 containers: []
	W0923 17:23:24.779508    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:24.779584    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:24.790563    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:24.790581    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:24.790589    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:24.806237    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:24.806251    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:24.818602    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:24.818615    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:24.844074    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:24.844083    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:24.885456    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:24.885464    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:24.889928    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:24.889943    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:24.907267    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:24.907279    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:24.919966    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:24.919981    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:24.935845    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:24.935857    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:24.977322    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:24.977333    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:24.989015    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:24.989028    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:25.001894    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:25.001905    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:25.039429    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:25.039440    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:25.051512    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:25.051522    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:25.062812    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:25.062824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:25.076691    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:25.076701    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:25.094322    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:25.094333    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:27.608718    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:32.610875    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:32.610970    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:32.622557    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:32.622646    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:32.634396    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:32.634479    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:32.646121    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:32.646205    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:32.657351    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:32.657438    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:32.668504    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:32.668586    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:32.683381    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:32.683468    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:32.694864    4508 logs.go:276] 0 containers: []
	W0923 17:23:32.694880    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:32.694954    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:32.706052    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:32.706072    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:32.706078    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:32.753212    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:32.753224    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:32.772290    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:32.772305    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:32.786755    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:32.786767    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:32.805181    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:32.805190    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:32.818430    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:32.818443    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:32.858304    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:32.858318    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:32.873296    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:32.873308    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:32.885458    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:32.885469    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:32.896527    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:32.896539    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:32.932158    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:32.932172    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:32.948636    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:32.948650    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:32.963794    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:32.963806    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:32.987554    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:32.987562    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:32.991978    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:32.991984    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:33.006175    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:33.006187    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:33.018524    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:33.018537    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:35.532146    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:40.534353    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:40.534455    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:40.546321    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:40.546414    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:40.559189    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:40.559281    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:40.570952    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:40.571040    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:40.586426    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:40.586519    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:40.598474    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:40.598562    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:40.610977    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:40.611063    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:40.621831    4508 logs.go:276] 0 containers: []
	W0923 17:23:40.621844    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:40.621918    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:40.633564    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:40.633586    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:40.633591    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:40.647148    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:40.647160    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:40.662605    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:40.662618    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:40.678071    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:40.678086    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:40.698489    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:40.698498    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:40.711056    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:40.711069    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:40.749870    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:40.749881    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:40.754057    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:40.754064    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:40.792056    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:40.792072    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:40.806188    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:40.806202    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:40.817043    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:40.817055    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:40.852290    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:40.852305    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:40.869877    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:40.869893    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:40.881225    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:40.881237    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:40.905453    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:40.905460    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:40.919464    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:40.919479    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:40.931249    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:40.931265    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:43.444858    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:48.447073    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:48.447174    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:48.458043    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:48.458131    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:48.469667    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:48.469757    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:48.485586    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:48.485676    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:48.497198    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:48.497287    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:48.508811    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:48.508887    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:48.520598    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:48.520691    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:48.531649    4508 logs.go:276] 0 containers: []
	W0923 17:23:48.531662    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:48.531737    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:48.543193    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:48.543212    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:48.543217    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:48.555673    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:48.555684    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:48.568095    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:48.568107    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:48.572728    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:48.572736    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:48.584792    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:48.584808    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:48.597223    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:48.597235    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:48.609411    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:48.609424    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:48.648433    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:48.648441    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:48.662940    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:48.662955    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:48.682062    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:48.682077    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:48.716443    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:48.716458    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:48.755237    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:48.755248    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:48.773784    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:48.773797    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:48.785177    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:48.785191    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:48.799922    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:48.799932    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:48.816778    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:48.816787    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:48.836340    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:48.836353    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
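Each cycle then tails 400 lines from every source: docker logs per container, journalctl for the kubelet and the docker/cri-docker units, dmesg for kernel-level warnings, kubectl describe nodes, and a container-status listing. The status command is the one non-obvious piece; it prefers crictl and falls back to docker:

# `which crictl || echo crictl` substitutes the bare name when crictl
# is not on PATH, so the first sudo command fails with "command not
# found" and the docker fallback after || runs instead.
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a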
	I0923 17:23:51.363262    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:23:56.365542    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:23:56.365638    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:23:56.376878    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:23:56.376963    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:23:56.387705    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:23:56.387794    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:23:56.398010    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:23:56.398099    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:23:56.408362    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:23:56.408447    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:23:56.418915    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:23:56.419004    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:23:56.429274    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:23:56.429351    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:23:56.439608    4508 logs.go:276] 0 containers: []
	W0923 17:23:56.439620    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:23:56.439689    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:23:56.450565    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:23:56.450586    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:23:56.450591    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:23:56.464400    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:23:56.464409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:23:56.476172    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:23:56.476185    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:23:56.515264    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:23:56.515273    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:23:56.529615    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:23:56.529625    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:23:56.551186    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:23:56.551201    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:23:56.573710    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:23:56.573718    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:23:56.577480    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:23:56.577485    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:23:56.612571    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:23:56.612587    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:23:56.630976    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:23:56.630989    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:23:56.642855    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:23:56.642866    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:23:56.659976    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:23:56.659992    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:23:56.673621    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:23:56.673638    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:23:56.687809    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:23:56.687824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:23:56.701582    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:23:56.701592    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:23:56.739618    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:23:56.739628    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:23:56.751102    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:23:56.751117    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:23:59.265557    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:04.267759    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:04.267867    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:04.278929    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:04.279022    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:04.293612    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:04.293696    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:04.303872    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:04.303952    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:04.314319    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:04.314413    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:04.324812    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:04.324896    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:04.335592    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:04.335678    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:04.345823    4508 logs.go:276] 0 containers: []
	W0923 17:24:04.345835    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:04.345917    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:04.357708    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:04.357729    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:04.357735    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:04.369339    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:04.369352    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:04.373706    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:04.373713    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:04.410133    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:04.410148    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:04.421978    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:04.421990    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:04.445850    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:04.445858    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:04.461254    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:04.461264    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:04.475475    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:04.475489    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:04.493845    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:04.493859    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:04.505830    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:04.505845    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:04.519732    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:04.519745    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:04.557667    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:04.557681    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:04.571462    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:04.571472    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:04.585998    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:04.586013    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:04.625447    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:04.625455    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:04.636995    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:04.637005    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:04.648261    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:04.648272    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:07.161326    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:12.161935    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:12.162060    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:12.174077    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:12.174164    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:12.185921    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:12.186005    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:12.197043    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:12.197120    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:12.207785    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:12.207866    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:12.217755    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:12.217840    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:12.231161    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:12.231233    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:12.241611    4508 logs.go:276] 0 containers: []
	W0923 17:24:12.241623    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:12.241699    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:12.252191    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:12.252209    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:12.252216    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:12.290526    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:12.290541    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:12.305550    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:12.305560    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:12.317499    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:12.317510    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:12.334887    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:12.334900    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:12.348105    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:12.348118    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:12.389290    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:12.389301    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:12.403638    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:12.403653    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:12.414921    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:12.414933    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:12.426192    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:12.426207    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:12.430520    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:12.430529    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:12.444164    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:12.444178    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:12.456356    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:12.456368    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:12.490552    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:12.490567    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:12.505569    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:12.505583    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:12.518154    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:12.518164    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:12.529779    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:12.529790    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:15.055563    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:20.057839    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:20.058046    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:20.083545    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:20.083629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:20.098176    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:20.098248    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:20.108416    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:20.108502    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:20.119476    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:20.119559    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:20.130132    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:20.130202    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:20.140885    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:20.140950    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:20.151371    4508 logs.go:276] 0 containers: []
	W0923 17:24:20.151384    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:20.151457    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:20.162228    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:20.162245    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:20.162250    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:20.176407    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:20.176420    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:20.195345    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:20.195359    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:20.211628    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:20.211644    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:20.223146    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:20.223157    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:20.227564    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:20.227571    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:20.241925    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:20.241936    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:20.277379    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:20.277390    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:20.289514    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:20.289527    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:20.311315    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:20.311323    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:20.322433    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:20.322445    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:20.336871    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:20.336885    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:20.349641    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:20.349657    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:20.362762    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:20.362785    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:20.385296    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:20.385310    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:20.399821    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:20.399837    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:20.439380    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:20.439390    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:22.985105    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:27.987451    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:27.987629    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:27.998777    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:27.998868    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:28.009249    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:28.009340    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:28.019760    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:28.019844    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:28.031053    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:28.031138    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:28.051519    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:28.051603    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:28.062532    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:28.062612    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:28.072393    4508 logs.go:276] 0 containers: []
	W0923 17:24:28.072405    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:28.072469    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:28.082849    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:28.082866    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:28.082871    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:28.100649    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:28.100660    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:28.123173    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:28.123185    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:28.162303    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:28.162320    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:28.199336    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:28.199346    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:28.219395    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:28.219406    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:28.232186    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:28.232198    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:28.244485    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:28.244496    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:28.266867    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:28.266876    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:28.301651    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:28.301666    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:28.316006    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:28.316016    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:28.330857    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:28.330868    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:28.343000    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:28.343012    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:28.348240    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:28.348253    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:28.363589    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:28.363599    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:28.375260    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:28.375271    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:28.395605    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:28.395620    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:30.910994    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:35.913324    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:35.913492    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:35.929198    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:35.929298    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:35.939811    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:35.939900    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:35.950529    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:35.950608    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:35.962797    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:35.962881    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:35.978151    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:35.978237    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:35.989453    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:35.989536    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:36.000251    4508 logs.go:276] 0 containers: []
	W0923 17:24:36.000264    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:36.000336    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:36.010996    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:36.011016    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:36.011021    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:36.034812    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:36.034821    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:36.068310    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:36.068325    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:36.083234    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:36.083245    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:36.094593    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:36.094605    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:36.106201    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:36.106213    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:36.124997    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:36.125007    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:36.137382    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:36.137393    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:36.176320    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:36.176334    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:36.180856    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:36.180865    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:36.224358    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:36.224376    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:36.237604    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:36.237616    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:36.250165    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:36.250176    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:36.264641    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:36.264651    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:36.279392    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:36.279405    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:36.296097    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:36.296111    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:36.308293    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:36.308309    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:38.823688    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:43.825262    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:43.825440    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:43.842220    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:43.842326    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:43.859780    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:43.859873    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:43.871061    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:43.871143    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:43.881698    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:43.881787    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:43.892333    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:43.892423    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:43.903204    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:43.903289    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:43.913218    4508 logs.go:276] 0 containers: []
	W0923 17:24:43.913235    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:43.913311    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:43.923934    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:43.923953    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:43.923958    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:43.957988    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:43.958004    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:43.995331    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:43.995345    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:44.012535    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:44.012549    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:44.026255    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:44.026269    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:44.038178    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:44.038193    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:44.042388    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:44.042398    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:44.056732    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:44.056742    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:44.071583    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:44.071594    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:44.083316    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:44.083325    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:44.098809    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:44.098824    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:44.122552    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:44.122565    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:44.161775    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:44.161786    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:44.184425    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:44.184433    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:44.207397    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:44.207409    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:44.223465    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:44.223476    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:44.243261    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:44.243277    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:46.757364    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:51.759638    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:51.759888    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:24:51.781518    4508 logs.go:276] 2 containers: [7b74f5c065d7 d197e6aae6df]
	I0923 17:24:51.781639    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:24:51.796808    4508 logs.go:276] 2 containers: [c141e927b7f4 d90f22288f74]
	I0923 17:24:51.796910    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:24:51.809528    4508 logs.go:276] 1 containers: [808e4297a92d]
	I0923 17:24:51.809607    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:24:51.820423    4508 logs.go:276] 2 containers: [c7cc55b6e894 f23fdf4a3c0e]
	I0923 17:24:51.820514    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:24:51.830664    4508 logs.go:276] 1 containers: [49a08bc36b02]
	I0923 17:24:51.830741    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:24:51.840944    4508 logs.go:276] 2 containers: [ff29c569e42d d3412f726c41]
	I0923 17:24:51.841031    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:24:51.851365    4508 logs.go:276] 0 containers: []
	W0923 17:24:51.851376    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:24:51.851447    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:24:51.861567    4508 logs.go:276] 2 containers: [966e66850c58 6911d1882836]
	I0923 17:24:51.861582    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:24:51.861587    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:24:51.865551    4508 logs.go:123] Gathering logs for kube-apiserver [d197e6aae6df] ...
	I0923 17:24:51.865559    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d197e6aae6df"
	I0923 17:24:51.902546    4508 logs.go:123] Gathering logs for kube-controller-manager [ff29c569e42d] ...
	I0923 17:24:51.902557    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff29c569e42d"
	I0923 17:24:51.919500    4508 logs.go:123] Gathering logs for storage-provisioner [6911d1882836] ...
	I0923 17:24:51.919511    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6911d1882836"
	I0923 17:24:51.930770    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:24:51.930782    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:24:51.943225    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:24:51.943237    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:24:51.977722    4508 logs.go:123] Gathering logs for etcd [c141e927b7f4] ...
	I0923 17:24:51.977738    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c141e927b7f4"
	I0923 17:24:51.992176    4508 logs.go:123] Gathering logs for etcd [d90f22288f74] ...
	I0923 17:24:51.992190    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d90f22288f74"
	I0923 17:24:52.006164    4508 logs.go:123] Gathering logs for kube-proxy [49a08bc36b02] ...
	I0923 17:24:52.006180    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a08bc36b02"
	I0923 17:24:52.017692    4508 logs.go:123] Gathering logs for kube-controller-manager [d3412f726c41] ...
	I0923 17:24:52.017706    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3412f726c41"
	I0923 17:24:52.030149    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:24:52.030165    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:24:52.067413    4508 logs.go:123] Gathering logs for kube-apiserver [7b74f5c065d7] ...
	I0923 17:24:52.067426    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b74f5c065d7"
	I0923 17:24:52.081988    4508 logs.go:123] Gathering logs for coredns [808e4297a92d] ...
	I0923 17:24:52.081999    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 808e4297a92d"
	I0923 17:24:52.093584    4508 logs.go:123] Gathering logs for kube-scheduler [c7cc55b6e894] ...
	I0923 17:24:52.093597    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7cc55b6e894"
	I0923 17:24:52.104866    4508 logs.go:123] Gathering logs for kube-scheduler [f23fdf4a3c0e] ...
	I0923 17:24:52.104876    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f23fdf4a3c0e"
	I0923 17:24:52.120434    4508 logs.go:123] Gathering logs for storage-provisioner [966e66850c58] ...
	I0923 17:24:52.120445    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 966e66850c58"
	I0923 17:24:52.131788    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:24:52.131801    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:24:54.656081    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:24:59.658431    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:24:59.658495    4508 kubeadm.go:597] duration metric: took 4m3.885519125s to restartPrimaryControlPlane
	W0923 17:24:59.658573    4508 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 17:24:59.658600    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 17:25:00.648521    4508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 17:25:00.653675    4508 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 17:25:00.656618    4508 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 17:25:00.659413    4508 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 17:25:00.659419    4508 kubeadm.go:157] found existing configuration files:
	
	I0923 17:25:00.659445    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf
	I0923 17:25:00.662004    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 17:25:00.662036    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 17:25:00.664639    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf
	I0923 17:25:00.667888    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 17:25:00.667916    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 17:25:00.671166    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf
	I0923 17:25:00.673701    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 17:25:00.673728    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 17:25:00.676514    4508 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf
	I0923 17:25:00.679140    4508 kubeadm.go:163] "https://control-plane.minikube.internal:50528" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50528 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 17:25:00.679170    4508 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
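The cleanup above is defensive: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails. Here every grep exits with status 2 because the files are simply absent (exit 1 would mean present but pointing elsewhere). The per-file pattern, using the endpoint from this run:

# Keep the kubeconfig only if it already targets the expected
# endpoint; a missing file (grep exit 2) also triggers removal.
ep="https://control-plane.minikube.internal:50528"
sudo grep "$ep" /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf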
	I0923 17:25:00.681848    4508 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 17:25:00.697744    4508 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 17:25:00.697817    4508 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 17:25:00.746918    4508 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 17:25:00.746984    4508 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 17:25:00.747046    4508 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 17:25:00.794786    4508 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 17:25:00.802008    4508 out.go:235]   - Generating certificates and keys ...
	I0923 17:25:00.802044    4508 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 17:25:00.802078    4508 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 17:25:00.802117    4508 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 17:25:00.802149    4508 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 17:25:00.802195    4508 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 17:25:00.802224    4508 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 17:25:00.802257    4508 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 17:25:00.802291    4508 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 17:25:00.802328    4508 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 17:25:00.802369    4508 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 17:25:00.802398    4508 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 17:25:00.802431    4508 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 17:25:00.841130    4508 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 17:25:00.921899    4508 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 17:25:01.017865    4508 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 17:25:01.414135    4508 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 17:25:01.442257    4508 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 17:25:01.442649    4508 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 17:25:01.442674    4508 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 17:25:01.537344    4508 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 17:25:01.545488    4508 out.go:235]   - Booting up control plane ...
	I0923 17:25:01.545541    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 17:25:01.545583    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 17:25:01.545618    4508 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 17:25:01.545659    4508 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 17:25:01.545754    4508 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
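kubeadm writes one static Pod manifest per control-plane component (plus etcd) into the folder named above and then waits, up to 4m0s, for the kubelet to start them. If the wait stalls, the manifests can be inspected directly in the guest:

# Static Pod manifests the kubelet watches for the control plane:
# kube-apiserver, kube-controller-manager, kube-scheduler, etcd.
sudo ls /etc/kubernetes/manifests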
	I0923 17:25:06.040119    4508 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501354 seconds
	I0923 17:25:06.040187    4508 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 17:25:06.043632    4508 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 17:25:06.556074    4508 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 17:25:06.556425    4508 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-180000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 17:25:07.060396    4508 kubeadm.go:310] [bootstrap-token] Using token: v1uqfy.5rc75n0j3i4peg2o
	I0923 17:25:07.066210    4508 out.go:235]   - Configuring RBAC rules ...
	I0923 17:25:07.066323    4508 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 17:25:07.066459    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 17:25:07.072767    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 17:25:07.073709    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 17:25:07.074754    4508 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 17:25:07.075766    4508 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 17:25:07.079087    4508 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 17:25:07.275210    4508 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 17:25:07.464851    4508 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 17:25:07.465340    4508 kubeadm.go:310] 
	I0923 17:25:07.465371    4508 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 17:25:07.465375    4508 kubeadm.go:310] 
	I0923 17:25:07.465420    4508 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 17:25:07.465426    4508 kubeadm.go:310] 
	I0923 17:25:07.465438    4508 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 17:25:07.465488    4508 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 17:25:07.465518    4508 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 17:25:07.465521    4508 kubeadm.go:310] 
	I0923 17:25:07.465553    4508 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 17:25:07.465556    4508 kubeadm.go:310] 
	I0923 17:25:07.465581    4508 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 17:25:07.465584    4508 kubeadm.go:310] 
	I0923 17:25:07.465609    4508 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 17:25:07.465665    4508 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 17:25:07.465704    4508 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 17:25:07.465710    4508 kubeadm.go:310] 
	I0923 17:25:07.465762    4508 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 17:25:07.465802    4508 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 17:25:07.465805    4508 kubeadm.go:310] 
	I0923 17:25:07.465868    4508 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v1uqfy.5rc75n0j3i4peg2o \
	I0923 17:25:07.465943    4508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 \
	I0923 17:25:07.465955    4508 kubeadm.go:310] 	--control-plane 
	I0923 17:25:07.465957    4508 kubeadm.go:310] 
	I0923 17:25:07.466025    4508 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 17:25:07.466029    4508 kubeadm.go:310] 
	I0923 17:25:07.466088    4508 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v1uqfy.5rc75n0j3i4peg2o \
	I0923 17:25:07.466146    4508 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9f5effcd2afcb047ae3a6a2be3abef4aeae2e1c83fa3875bd26ffc0e053ab789 
	I0923 17:25:07.466208    4508 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
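Note: the WARNING above is actionable: kubeadm started the kubelet for this boot but did not enable the unit, so it would not come back after a reboot. The fix it suggests:

    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet   # should now print "enabled"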
	I0923 17:25:07.466218    4508 cni.go:84] Creating CNI manager for ""
	I0923 17:25:07.466227    4508 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:25:07.470731    4508 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 17:25:07.478739    4508 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 17:25:07.481650    4508 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
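Note: minikube pushes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The exact bytes are not reproduced in this log; a minimal bridge conflist of the same general shape (field values here are illustrative assumptions, not minikube's actual file) would be written like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF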
	I0923 17:25:07.486275    4508 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 17:25:07.486317    4508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 17:25:07.486345    4508 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-180000 minikube.k8s.io/updated_at=2024_09_23T17_25_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=stopped-upgrade-180000 minikube.k8s.io/primary=true
	I0923 17:25:07.529919    4508 kubeadm.go:1113] duration metric: took 43.636208ms to wait for elevateKubeSystemPrivileges
	I0923 17:25:07.529927    4508 ops.go:34] apiserver oom_adj: -16
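Note: the two kubectl calls above grant cluster-admin to the kube-system default service account and stamp the node with minikube's metadata labels. Both can be verified by hand with the same binary and kubeconfig the runner used:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node stopped-upgrade-180000 --show-labels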
	I0923 17:25:07.529936    4508 kubeadm.go:394] duration metric: took 4m11.770601792s to StartCluster
	I0923 17:25:07.529945    4508 settings.go:142] acquiring lock: {Name:mk533b8e20cbdc896b9e0666ee546603a1b156f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:25:07.530032    4508 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:25:07.530433    4508 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/kubeconfig: {Name:mk52c76fc8ff944a7bcab52e821c0354dabfa3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:25:07.530655    4508 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:25:07.530663    4508 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 17:25:07.530697    4508 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-180000"
	I0923 17:25:07.530707    4508 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-180000"
	W0923 17:25:07.530710    4508 addons.go:243] addon storage-provisioner should already be in state true
	I0923 17:25:07.530721    4508 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0923 17:25:07.530732    4508 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:25:07.530770    4508 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-180000"
	I0923 17:25:07.530775    4508 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-180000"
	I0923 17:25:07.531704    4508 kapi.go:59] client config for stopped-upgrade-180000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/stopped-upgrade-180000/client.key", CAFile:"/Users/jenkins/minikube-integration/19696-1109/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10287a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 17:25:07.531826    4508 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-180000"
	W0923 17:25:07.531830    4508 addons.go:243] addon default-storageclass should already be in state true
	I0923 17:25:07.531837    4508 host.go:66] Checking if "stopped-upgrade-180000" exists ...
	I0923 17:25:07.533662    4508 out.go:177] * Verifying Kubernetes components...
	I0923 17:25:07.534002    4508 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 17:25:07.537855    4508 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 17:25:07.537861    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:25:07.541618    4508 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 17:25:07.545668    4508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 17:25:07.549742    4508 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:25:07.549749    4508 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 17:25:07.549755    4508 sshutil.go:53] new ssh client: &{IP:localhost Port:50494 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/stopped-upgrade-180000/id_rsa Username:docker}
	I0923 17:25:07.635938    4508 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 17:25:07.641856    4508 api_server.go:52] waiting for apiserver process to appear ...
	I0923 17:25:07.641901    4508 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 17:25:07.645984    4508 api_server.go:72] duration metric: took 115.320125ms to wait for apiserver process to appear ...
	I0923 17:25:07.645992    4508 api_server.go:88] waiting for apiserver healthz status ...
	I0923 17:25:07.646000    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
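Note: the readiness loop that follows polls the apiserver's health endpoint, which is normally readable without credentials. The equivalent manual probe, from wherever 10.0.2.15 is routable (with QEMU user-mode networking that is typically only from inside the guest):

    curl -k https://10.0.2.15:8443/healthz   # prints "ok" when the apiserver is healthy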
	I0923 17:25:07.651629    4508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 17:25:07.707092    4508 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 17:25:08.018498    4508 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 17:25:08.018510    4508 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 17:25:12.648056    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:12.648110    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:17.648395    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:17.648428    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:22.649096    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:22.649117    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:27.649748    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:27.649787    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:32.650460    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:32.650489    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:37.651304    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:37.651329    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 17:25:38.020524    4508 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 17:25:38.029743    4508 out.go:177] * Enabled addons: storage-provisioner
	I0923 17:25:38.037706    4508 addons.go:510] duration metric: took 30.507258541s for enable addons: enabled=[storage-provisioner]
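Note: storage-provisioner is reported enabled while default-storageclass failed on the i/o timeout above. The per-profile addon state can be listed from the host; a hedged check:

    minikube -p stopped-upgrade-180000 addons list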
	I0923 17:25:42.652420    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:42.652469    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:47.654260    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:47.654318    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:52.656192    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:52.656237    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:25:57.658512    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:25:57.658536    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:02.660548    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:02.660569    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:07.662713    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:07.662845    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:07.674544    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:07.674630    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:07.685320    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:07.685411    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:07.695313    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:07.695395    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:07.708459    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:07.708539    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:07.718851    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:07.718939    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:07.733426    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:07.733517    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:07.743424    4508 logs.go:276] 0 containers: []
	W0923 17:26:07.743436    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:07.743504    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:07.754087    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:07.754105    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:07.754112    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:07.758710    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:07.758721    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:07.798841    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:07.798857    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:07.813725    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:07.813738    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:07.828141    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:07.828152    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:07.840386    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:07.840401    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:07.857382    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:07.857395    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:07.872655    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:07.872667    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:07.908241    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:07.908248    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:07.932936    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:07.932944    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:07.947735    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:07.947749    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:07.959770    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:07.959781    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:07.971366    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:07.971379    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
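Note: each unhealthy poll triggers the same diagnostic sweep: enumerate the k8s_* containers with a docker ps name filter, then tail each one's logs along with kubelet, Docker, and dmesg. The sweep is reproducible by hand inside the VM with the same commands the runner issues, e.g.:

    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    docker logs --tail 400 a2fb4de8ca39    # apiserver container ID from the listing above
    sudo journalctl -u kubelet -n 400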
	I0923 17:26:10.485143    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:15.487513    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:15.487627    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:15.498837    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:15.498926    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:15.510246    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:15.510325    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:15.521140    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:15.521228    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:15.531618    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:15.531710    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:15.546283    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:15.546378    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:15.557903    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:15.557988    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:15.572459    4508 logs.go:276] 0 containers: []
	W0923 17:26:15.572470    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:15.572542    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:15.582671    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:15.582690    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:15.582697    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:15.598285    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:15.598294    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:15.609883    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:15.609895    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:15.645576    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:15.645588    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:15.650312    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:15.650320    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:15.666847    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:15.666858    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:15.678246    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:15.678258    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:15.690061    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:15.690072    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:15.715117    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:15.715134    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:15.726526    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:15.726539    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:15.766214    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:15.766225    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:15.780264    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:15.780280    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:15.792383    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:15.792397    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:18.314425    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:23.316922    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:23.317497    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:23.360177    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:23.360340    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:23.381256    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:23.381387    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:23.396473    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:23.396553    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:23.408816    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:23.408891    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:23.419464    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:23.419535    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:23.429950    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:23.430038    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:23.440619    4508 logs.go:276] 0 containers: []
	W0923 17:26:23.440631    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:23.440699    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:23.451161    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:23.451178    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:23.451183    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:23.466454    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:23.466467    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:23.491615    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:23.491623    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:23.527470    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:23.527485    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:23.545181    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:23.545194    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:23.557115    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:23.557126    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:23.568361    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:23.568374    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:23.580061    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:23.580073    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:23.597977    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:23.597988    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:23.610128    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:23.610140    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:23.624374    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:23.624386    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:23.659027    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:23.659034    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:23.663291    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:23.663300    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:26.181366    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:31.184120    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:31.184687    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:31.224125    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:31.224272    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:31.244713    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:31.244845    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:31.259365    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:31.259449    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:31.271219    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:31.271301    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:31.284338    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:31.284421    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:31.294733    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:31.294809    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:31.304809    4508 logs.go:276] 0 containers: []
	W0923 17:26:31.304819    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:31.304885    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:31.315295    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:31.315313    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:31.315319    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:31.355766    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:31.355778    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:31.370767    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:31.370778    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:31.382540    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:31.382556    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:31.399716    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:31.399731    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:31.410990    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:31.411001    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:31.422376    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:31.422391    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:31.455984    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:31.455992    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:31.459846    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:31.459855    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:31.474465    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:31.474477    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:31.488231    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:31.488243    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:31.508627    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:31.508636    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:31.522893    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:31.522908    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:34.048670    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:39.051406    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:39.051971    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:39.098278    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:39.098431    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:39.121632    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:39.121729    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:39.134844    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:39.134938    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:39.146666    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:39.146740    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:39.157392    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:39.157484    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:39.168357    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:39.168464    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:39.184086    4508 logs.go:276] 0 containers: []
	W0923 17:26:39.184098    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:39.184164    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:39.195301    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:39.195316    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:39.195322    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:39.207413    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:39.207429    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:39.218898    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:39.218913    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:39.233710    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:39.233722    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:39.245159    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:39.245171    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:39.281635    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:39.281647    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:39.285794    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:39.285803    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:39.321661    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:39.321674    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:39.336040    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:39.336055    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:39.354813    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:39.354826    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:39.366301    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:39.366311    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:39.384081    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:39.384091    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:39.407855    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:39.407870    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:41.921552    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:46.922219    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:46.922704    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:46.957022    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:46.957186    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:46.978270    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:46.978388    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:46.994793    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:46.994897    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:47.006836    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:47.006921    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:47.017439    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:47.017527    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:47.028057    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:47.028137    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:47.038648    4508 logs.go:276] 0 containers: []
	W0923 17:26:47.038662    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:47.038731    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:47.049367    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:47.049385    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:47.049393    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:47.083622    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:47.083633    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:47.098246    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:47.098257    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:47.110127    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:47.110137    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:47.125568    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:47.125579    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:47.137498    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:47.137507    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:47.150597    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:47.150612    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:47.155437    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:47.155444    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:47.169635    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:47.169646    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:47.181356    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:47.181366    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:47.198967    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:47.198982    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:47.214894    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:47.214905    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:47.239419    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:47.239427    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:49.776393    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:26:54.778852    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:26:54.779380    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:26:54.821696    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:26:54.821851    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:26:54.845826    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:26:54.845956    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:26:54.860719    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:26:54.860812    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:26:54.873075    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:26:54.873159    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:26:54.885551    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:26:54.885627    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:26:54.896114    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:26:54.896189    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:26:54.906034    4508 logs.go:276] 0 containers: []
	W0923 17:26:54.906044    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:26:54.906104    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:26:54.916820    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:26:54.916834    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:26:54.916840    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:26:54.929159    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:26:54.929170    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:26:54.963657    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:26:54.963670    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:26:54.979067    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:26:54.979080    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:26:54.992976    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:26:54.992987    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:26:55.004480    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:26:55.004495    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:26:55.025097    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:26:55.025109    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:26:55.036977    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:26:55.036992    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:26:55.071987    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:26:55.071998    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:26:55.076206    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:26:55.076214    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:26:55.087874    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:26:55.087891    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:26:55.100066    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:26:55.100077    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:26:55.122464    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:26:55.122474    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:26:57.647821    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:02.648479    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:02.648953    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:02.681495    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:02.681651    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:02.700281    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:02.700384    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:02.714553    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:27:02.714641    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:02.726526    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:02.726605    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:02.737011    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:02.737098    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:02.748097    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:02.748181    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:02.758326    4508 logs.go:276] 0 containers: []
	W0923 17:27:02.758341    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:02.758417    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:02.768642    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:02.768656    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:02.768662    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:02.804042    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:02.804059    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:02.837522    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:02.837533    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:02.848649    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:02.848662    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:02.865353    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:02.865362    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:02.869717    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:02.869727    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:02.883666    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:02.883676    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:02.896999    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:02.897010    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:02.910292    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:02.910303    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:02.926201    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:02.926211    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:02.944408    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:02.944424    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:02.956081    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:02.956092    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:02.980814    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:02.980821    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:05.494068    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:10.496903    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:10.497381    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:10.528420    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:10.528575    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:10.549319    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:10.549466    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:10.564071    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:27:10.564176    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:10.575909    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:10.575997    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:10.586823    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:10.586906    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:10.597108    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:10.597204    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:10.606887    4508 logs.go:276] 0 containers: []
	W0923 17:27:10.606903    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:10.606975    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:10.617645    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:10.617663    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:10.617671    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:10.630256    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:10.630266    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:10.664256    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:10.664264    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:10.698582    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:10.698592    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:10.712901    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:10.712914    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:10.724173    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:10.724184    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:10.735422    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:10.735435    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:10.747356    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:10.747369    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:10.764468    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:10.764480    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:10.788962    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:10.788968    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:10.793049    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:10.793057    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:10.806632    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:10.806643    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:10.821085    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:10.821098    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:13.334501    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:18.337224    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:18.337682    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:18.367238    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:18.367415    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:18.389302    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:18.389410    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:18.402400    4508 logs.go:276] 2 containers: [5e60256ac43b aaa92bcb160c]
	I0923 17:27:18.402487    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:18.414113    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:18.414186    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:18.424806    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:18.424876    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:18.438702    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:18.438767    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:18.449038    4508 logs.go:276] 0 containers: []
	W0923 17:27:18.449050    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:18.449124    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:18.459266    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:18.459286    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:18.459291    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:18.471705    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:18.471717    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:18.492123    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:18.492138    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:18.503742    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:18.503752    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:18.508086    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:18.508094    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:18.543522    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:18.543535    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:18.558085    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:18.558097    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:18.569706    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:18.569719    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:18.581099    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:18.581109    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:18.605511    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:18.605519    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:18.638771    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:18.638782    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:18.652458    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:18.652471    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:18.666430    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:18.666444    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
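
Each retry begins by resolving component names to container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, so exited containers are matched as well; an empty result, as for "kindnet" above, is logged as a warning (W) rather than treated as a failure. A hedged Go sketch of the same lookup; the function and variable names are invented for illustration:

    // enumerate.go: illustrative sketch of the per-component container lookup.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists containers, running or exited, whose name carries
    // the kubeadm-style k8s_<component> prefix, e.g. k8s_kube-apiserver.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
    		ids, err := containerIDs(c)
    		switch {
    		case err != nil:
    			fmt.Println("lookup failed:", err)
    		case len(ids) == 0:
    			fmt.Printf("No container was found matching %q\n", c)
    		default:
    			fmt.Printf("%d containers: %v\n", len(ids), ids)
    		}
    	}
    }
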
	I0923 17:27:21.188582    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:26.191336    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:26.191600    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:26.219090    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:26.219246    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:26.236860    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:26.236970    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:26.250531    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:27:26.250626    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:26.262068    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:26.262141    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:26.272291    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:26.272371    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:26.282899    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:26.282986    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:26.293231    4508 logs.go:276] 0 containers: []
	W0923 17:27:26.293242    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:26.293307    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:26.307593    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:26.307610    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:26.307616    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:26.321897    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:27:26.321909    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:27:26.332668    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:26.332679    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:26.347571    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:26.347580    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:26.365193    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:26.365205    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:26.376773    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:26.376787    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:26.390019    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:26.390029    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:26.402133    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:26.402143    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:26.436105    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:26.436119    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:26.449366    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:26.449377    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:26.461044    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:26.461055    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:26.493618    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:26.493627    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:26.497800    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:27:26.497810    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:27:26.508952    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:26.508963    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:26.523547    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:26.523558    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
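
For every ID found, the collector tails the last 400 lines of that container's output, pulls host-side units through journalctl and dmesg, and runs the version-pinned in-VM kubectl for "describe nodes"; the container-status step prefers crictl and falls back to docker via the shell idiom visible above. A minimal sketch of that gathering pass, wrapping each command in /bin/bash -c exactly as the ssh_runner lines do (the container ID is a placeholder copied from this log; the helper names are assumptions):

    // gather.go: illustrative sketch of the log-gathering commands.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runBash mirrors the `/bin/bash -c "..."` invocations in the log.
    func runBash(script string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	cmds := []string{
    		// last 400 lines of one component container
    		"docker logs --tail 400 a2fb4de8ca39",
    		// host-side units and kernel messages
    		"sudo journalctl -u kubelet -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		// cluster view via the version-pinned kubectl inside the VM
    		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
    			" --kubeconfig=/var/lib/minikube/kubeconfig",
    		// prefer crictl if present, otherwise fall back to docker
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for _, c := range cmds {
    		if out, err := runBash(c); err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    		} else {
    			fmt.Print(out)
    		}
    	}
    }

The fallback works because `which crictl || echo crictl` substitutes the bare word crictl when the binary is absent, so the first ps invocation fails and the trailing || sudo docker ps -a branch runs instead.
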
	I0923 17:27:29.050471    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:34.052787    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:34.053377    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:34.094565    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:34.094735    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:34.121499    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:34.121601    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:34.135234    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:27:34.135326    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:34.146626    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:34.146713    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:34.156929    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:34.157017    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:34.171276    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:34.171362    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:34.182314    4508 logs.go:276] 0 containers: []
	W0923 17:27:34.182333    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:34.182409    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:34.193575    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:34.193595    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:34.193600    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:34.226871    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:34.226880    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:34.240786    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:34.240795    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:34.258633    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:34.258644    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:34.271124    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:34.271136    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:34.283459    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:34.283471    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:34.309011    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:27:34.309022    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:27:34.319862    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:34.319873    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:34.324183    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:34.324191    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:34.360493    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:34.360504    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:34.374054    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:27:34.374065    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:27:34.385530    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:34.385545    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:34.397443    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:34.397459    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:34.412737    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:34.412748    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:34.424657    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:34.424667    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:36.936283    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:41.936410    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:41.936503    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:41.949243    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:41.949327    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:41.959786    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:41.959859    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:41.970570    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:27:41.970655    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:41.981507    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:41.981586    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:41.995501    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:41.995578    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:42.006191    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:42.006264    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:42.016464    4508 logs.go:276] 0 containers: []
	W0923 17:27:42.016475    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:42.016534    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:42.027177    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:42.027197    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:42.027203    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:42.031844    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:42.031850    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:42.046445    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:42.046459    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:42.058433    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:42.058445    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:42.093616    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:42.093627    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:42.127828    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:42.127838    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:42.141847    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:27:42.141855    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:27:42.158920    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:27:42.158932    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:27:42.174188    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:42.174205    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:42.185588    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:42.185604    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:42.197242    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:42.197257    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:42.212372    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:42.212387    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:42.230977    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:42.230988    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:42.254537    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:42.254548    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:42.278170    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:42.278178    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:44.791637    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:49.792783    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:49.793354    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:49.837717    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:49.837889    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:49.857242    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:49.857375    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:49.871814    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:27:49.871904    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:49.884387    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:49.884476    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:49.895545    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:49.895620    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:49.905930    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:49.905999    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:49.915988    4508 logs.go:276] 0 containers: []
	W0923 17:27:49.916001    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:49.916064    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:49.926395    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:49.926413    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:49.926418    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:49.941050    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:49.941061    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:49.958449    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:49.958460    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:49.984029    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:49.984036    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:49.995210    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:49.995220    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:27:50.007231    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:50.007243    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:50.020146    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:50.020157    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:50.053510    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:50.053520    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:50.088639    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:27:50.088652    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:27:50.100421    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:50.100435    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:50.111973    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:50.111986    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:50.129953    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:50.129966    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:50.135389    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:50.135399    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:50.149129    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:50.149142    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:50.163441    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:27:50.163456    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:27:52.677068    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:27:57.677779    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:27:57.678257    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:27:57.726473    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:27:57.726606    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:27:57.746379    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:27:57.746479    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:27:57.759560    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:27:57.759659    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:27:57.771116    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:27:57.771199    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:27:57.784428    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:27:57.784516    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:27:57.795055    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:27:57.795132    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:27:57.805094    4508 logs.go:276] 0 containers: []
	W0923 17:27:57.805104    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:27:57.805175    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:27:57.815777    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:27:57.815798    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:27:57.815804    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:27:57.828315    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:27:57.828324    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:27:57.840182    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:27:57.840194    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:27:57.854099    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:27:57.854111    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:27:57.867600    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:27:57.867615    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:27:57.881546    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:27:57.881556    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:27:57.898359    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:27:57.898369    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:27:57.923450    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:27:57.923457    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:27:57.927850    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:27:57.927856    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:27:57.960852    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:27:57.960861    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:27:57.975540    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:27:57.975550    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:27:57.990551    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:27:57.990561    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:27:58.002780    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:27:58.002790    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:27:58.037955    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:27:58.037962    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:27:58.049259    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:27:58.049274    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
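
Stepping back, each block above is one iteration of a bounded retry loop: probe /healthz, and on failure re-enumerate the control-plane containers and re-collect their logs before trying again (the checks land roughly every eight seconds: a five-second probe timeout, the gathering pass, then a short pause). An outline of that loop under assumed values; the overall budget and the ~2.5 s pause are inferred from the timestamps, not taken from minikube's source:

    // waitloop.go: illustrative outline of the retry cycle in this log.
    package main

    import (
    	"fmt"
    	"time"
    )

    // Stand-ins for the probe and gathering sketches shown earlier.
    func checkHealthz(url string) error { return fmt.Errorf("timeout") }
    func gatherLogs()                   { /* enumerate containers, tail their logs */ }

    func main() {
    	deadline := time.Now().Add(15 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		gatherLogs()                        // the "Gathering logs for ..." block
    		time.Sleep(2500 * time.Millisecond) // pause inferred between cycles
    	}
    	fmt.Println("apiserver never became healthy")
    }
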
	I0923 17:28:00.562779    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:05.565453    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:05.565580    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:05.576497    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:05.576580    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:05.587870    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:05.587942    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:05.600057    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:05.600128    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:05.611895    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:05.611959    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:05.624472    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:05.624544    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:05.636051    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:05.636131    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:05.648106    4508 logs.go:276] 0 containers: []
	W0923 17:28:05.648119    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:05.648177    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:05.666403    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:05.666420    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:05.666426    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:05.682398    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:05.682410    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:05.697787    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:05.697797    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:05.709798    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:05.709809    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:05.728793    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:05.728809    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:05.766040    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:05.766050    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:05.785516    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:05.785526    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:05.797109    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:05.797119    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:05.822340    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:05.822361    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:05.835623    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:05.835637    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:05.848730    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:05.848740    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:05.884798    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:05.884808    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:05.889117    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:05.889125    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:05.905341    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:05.905350    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:05.917949    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:05.917961    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:08.439338    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:13.439708    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:13.440110    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:13.475650    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:13.475805    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:13.500442    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:13.500556    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:13.515872    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:13.515982    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:13.533720    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:13.533806    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:13.550556    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:13.550641    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:13.563161    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:13.563246    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:13.575864    4508 logs.go:276] 0 containers: []
	W0923 17:28:13.575878    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:13.575952    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:13.588293    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:13.588318    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:13.588324    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:13.603670    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:13.603687    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:13.616600    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:13.616615    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:13.638328    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:13.638343    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:13.675778    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:13.675793    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:13.680793    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:13.680803    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:13.693002    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:13.693016    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:13.705534    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:13.705550    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:13.741119    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:13.741137    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:13.752629    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:13.752641    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:13.767692    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:13.767709    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:13.792428    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:13.792441    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:13.807074    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:13.807087    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:13.819021    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:13.819032    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:13.830987    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:13.830999    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:16.347633    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:21.349835    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:21.350300    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:21.382091    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:21.382255    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:21.399320    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:21.399421    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:21.413698    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:21.413793    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:21.425661    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:21.425746    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:21.436183    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:21.436261    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:21.446575    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:21.446653    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:21.457396    4508 logs.go:276] 0 containers: []
	W0923 17:28:21.457409    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:21.457480    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:21.468055    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:21.468075    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:21.468081    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:21.483054    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:21.483067    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:21.494788    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:21.494798    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:21.530521    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:21.530531    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:21.545011    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:21.545022    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:21.556594    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:21.556605    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:21.560915    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:21.560922    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:21.578518    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:21.578529    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:21.590095    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:21.590108    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:21.606077    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:21.606093    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:21.618033    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:21.618047    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:21.642492    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:21.642502    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:21.676214    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:21.676221    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:21.694238    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:21.694252    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:21.706147    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:21.706163    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:24.219694    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:29.221222    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:29.221304    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:29.234308    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:29.234384    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:29.245584    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:29.245657    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:29.256307    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:29.256386    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:29.268383    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:29.268460    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:29.281169    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:29.281239    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:29.292812    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:29.292881    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:29.304014    4508 logs.go:276] 0 containers: []
	W0923 17:28:29.304026    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:29.304090    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:29.315673    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:29.315690    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:29.315696    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:29.332367    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:29.332381    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:29.353058    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:29.353074    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:29.391827    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:29.391839    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:29.403997    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:29.404009    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:29.416945    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:29.416956    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:29.429914    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:29.429930    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:29.446597    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:29.446611    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:29.461629    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:29.461643    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:29.475153    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:29.475164    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:29.487014    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:29.487023    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:29.499562    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:29.499573    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:29.525913    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:29.525924    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:29.561440    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:29.561455    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:29.566284    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:29.566291    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:32.080293    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:37.082966    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:37.083595    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:37.119581    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:37.119770    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:37.140611    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:37.140726    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:37.155171    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:37.155270    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:37.167480    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:37.167570    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:37.178301    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:37.178389    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:37.189022    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:37.189094    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:37.199797    4508 logs.go:276] 0 containers: []
	W0923 17:28:37.199810    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:37.199876    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:37.210118    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:37.210137    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:37.210142    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:37.243112    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:37.243122    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:37.256789    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:37.256803    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:37.268317    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:37.268329    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:37.279685    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:37.279696    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:37.292007    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:37.292022    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:37.310523    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:37.310533    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:37.322332    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:37.322346    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:37.347417    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:37.347428    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:37.361833    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:37.361846    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:37.373184    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:37.373193    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:37.390468    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:37.390479    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:37.394929    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:37.394936    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:37.429709    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:37.429724    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:37.442301    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:37.442311    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:39.955961    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:44.958670    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:44.959273    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:44.999506    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:44.999670    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:45.021086    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:45.021220    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:45.036800    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:45.036894    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:45.049652    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:45.049736    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:45.060198    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:45.060283    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:45.071125    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:45.071197    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:45.085538    4508 logs.go:276] 0 containers: []
	W0923 17:28:45.085547    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:45.085610    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:45.099400    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:45.099418    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:45.099424    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:45.136180    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:45.136194    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:45.150856    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:45.150867    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:45.162457    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:45.162469    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:45.195791    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:45.195803    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:45.200603    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:45.200609    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:45.212765    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:45.212777    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:45.234051    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:45.234061    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:45.245866    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:45.245877    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:45.266791    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:45.266815    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:45.290615    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:45.290627    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:45.302490    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:45.302502    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:45.315755    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:45.315765    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:45.340623    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:45.340635    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:45.353630    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:45.353644    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:47.873046    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:28:52.875374    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:28:52.875481    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:28:52.887378    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:28:52.887459    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:28:52.899509    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:28:52.899587    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:28:52.912082    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:28:52.912152    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:28:52.922823    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:28:52.922899    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:28:52.933665    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:28:52.933751    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:28:52.946535    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:28:52.946622    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:28:52.958589    4508 logs.go:276] 0 containers: []
	W0923 17:28:52.958599    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:28:52.958653    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:28:52.972509    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:28:52.972530    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:28:52.972536    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:28:52.977065    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:28:52.977076    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:28:52.990900    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:28:52.990912    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:28:53.032892    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:28:53.032904    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:28:53.048125    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:28:53.048134    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:28:53.062660    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:28:53.062672    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:28:53.075804    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:28:53.075818    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:28:53.092751    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:28:53.092763    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:28:53.118439    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:28:53.118458    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:28:53.138940    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:28:53.138955    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:28:53.151406    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:28:53.151420    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:28:53.164139    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:28:53.164151    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:28:53.200975    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:28:53.200990    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:28:53.213166    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:28:53.213174    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:28:53.225552    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:28:53.225566    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:28:55.747845    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:29:00.750624    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:29:00.751215    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 17:29:00.791673    4508 logs.go:276] 1 containers: [a2fb4de8ca39]
	I0923 17:29:00.791836    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 17:29:00.813308    4508 logs.go:276] 1 containers: [1f9b9ba09b4b]
	I0923 17:29:00.813408    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 17:29:00.829858    4508 logs.go:276] 4 containers: [307e959f4aa1 7aaa0ab9d2e6 5e60256ac43b aaa92bcb160c]
	I0923 17:29:00.829963    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 17:29:00.842191    4508 logs.go:276] 1 containers: [705b157f31c3]
	I0923 17:29:00.842280    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 17:29:00.853322    4508 logs.go:276] 1 containers: [b74f46c74d96]
	I0923 17:29:00.853405    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 17:29:00.867503    4508 logs.go:276] 1 containers: [b6b1da77d7d1]
	I0923 17:29:00.867590    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 17:29:00.878605    4508 logs.go:276] 0 containers: []
	W0923 17:29:00.878617    4508 logs.go:278] No container was found matching "kindnet"
	I0923 17:29:00.878690    4508 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 17:29:00.894205    4508 logs.go:276] 1 containers: [297e5a3d5a8a]
	I0923 17:29:00.894225    4508 logs.go:123] Gathering logs for kubelet ...
	I0923 17:29:00.894230    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 17:29:00.928591    4508 logs.go:123] Gathering logs for describe nodes ...
	I0923 17:29:00.928603    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 17:29:00.962588    4508 logs.go:123] Gathering logs for coredns [307e959f4aa1] ...
	I0923 17:29:00.962600    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 307e959f4aa1"
	I0923 17:29:00.974470    4508 logs.go:123] Gathering logs for kube-scheduler [705b157f31c3] ...
	I0923 17:29:00.974487    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 705b157f31c3"
	I0923 17:29:00.989417    4508 logs.go:123] Gathering logs for kube-apiserver [a2fb4de8ca39] ...
	I0923 17:29:00.989433    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fb4de8ca39"
	I0923 17:29:01.003694    4508 logs.go:123] Gathering logs for coredns [aaa92bcb160c] ...
	I0923 17:29:01.003704    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaa92bcb160c"
	I0923 17:29:01.015209    4508 logs.go:123] Gathering logs for kube-proxy [b74f46c74d96] ...
	I0923 17:29:01.015224    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b74f46c74d96"
	I0923 17:29:01.027329    4508 logs.go:123] Gathering logs for storage-provisioner [297e5a3d5a8a] ...
	I0923 17:29:01.027345    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 297e5a3d5a8a"
	I0923 17:29:01.039015    4508 logs.go:123] Gathering logs for Docker ...
	I0923 17:29:01.039030    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 17:29:01.063851    4508 logs.go:123] Gathering logs for dmesg ...
	I0923 17:29:01.063859    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 17:29:01.068419    4508 logs.go:123] Gathering logs for etcd [1f9b9ba09b4b] ...
	I0923 17:29:01.068425    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f9b9ba09b4b"
	I0923 17:29:01.087384    4508 logs.go:123] Gathering logs for coredns [7aaa0ab9d2e6] ...
	I0923 17:29:01.087394    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7aaa0ab9d2e6"
	I0923 17:29:01.099200    4508 logs.go:123] Gathering logs for coredns [5e60256ac43b] ...
	I0923 17:29:01.099214    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e60256ac43b"
	I0923 17:29:01.111237    4508 logs.go:123] Gathering logs for kube-controller-manager [b6b1da77d7d1] ...
	I0923 17:29:01.111247    4508 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6b1da77d7d1"
	I0923 17:29:01.128260    4508 logs.go:123] Gathering logs for container status ...
	I0923 17:29:01.128271    4508 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 17:29:03.641990    4508 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 17:29:08.643279    4508 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 17:29:08.647345    4508 out.go:201] 
	W0923 17:29:08.650429    4508 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 17:29:08.650448    4508 out.go:270] * 
	W0923 17:29:08.651814    4508 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:08.661349    4508 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.41s)
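
The trace above shows minikube polling https://10.0.2.15:8443/healthz every few seconds, re-enumerating the control-plane containers and re-gathering their logs after each timeout, until the 6m node-wait budget is exhausted; the apiserver container (a2fb4de8ca39) exists but never reports healthy. To reproduce the probe by hand (a sketch, not part of the test; the profile name and in-guest address are taken from the log above):

	# Probe the apiserver health endpoint from inside the guest (sketch).
	# 10.0.2.15:8443 is the apiserver address reported by api_server.go above.
	out/minikube-darwin-arm64 -p stopped-upgrade-180000 ssh -- \
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz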

TestPause/serial/Start (9.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-354000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-354000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.929634458s)

-- stdout --
	* [pause-354000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-354000" primary control-plane node in "pause-354000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-354000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-354000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-354000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-354000 -n pause-354000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-354000 -n pause-354000: exit status 7 (51.232584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-354000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)
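
This failure, and every remaining qemu2 failure below, dies at the same point: socket_vmnet_client cannot reach the host-side socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM is never created. A quick host-side check (a sketch; the socket path and client path are the defaults printed in the traces):

	# Confirm the socket_vmnet daemon is up on the macOS host (sketch).
	ls -l /var/run/socket_vmnet                  # the Unix socket should exist
	pgrep -fl socket_vmnet                       # the daemon should be running
	sudo launchctl list | grep -i socket_vmnet   # if installed as a LaunchDaemon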

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 : exit status 80 (9.82275175s)

-- stdout --
	* [NoKubernetes-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-629000" primary control-plane node in "NoKubernetes-629000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-629000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-629000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000: exit status 7 (36.08975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestNoKubernetes/serial/StartWithStopK8s (5.31s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 : exit status 80 (5.247834792s)

-- stdout --
	* [NoKubernetes-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-629000
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000: exit status 7 (62.02625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.31s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250155708s)

-- stdout --
	* [NoKubernetes-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-629000
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000: exit status 7 (50.587667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 : exit status 80 (5.263089625s)

-- stdout --
	* [NoKubernetes-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-629000
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-629000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-629000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-629000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-629000 -n NoKubernetes-629000: exit status 7 (65.060208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-629000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (9.99s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.9897205s)

-- stdout --
	* [auto-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-780000" primary control-plane node in "auto-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:27:29.189450    4748 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:27:29.189594    4748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:29.189597    4748 out.go:358] Setting ErrFile to fd 2...
	I0923 17:27:29.189599    4748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:29.189724    4748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:27:29.190830    4748 out.go:352] Setting JSON to false
	I0923 17:27:29.207268    4748 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3412,"bootTime":1727134237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:27:29.207336    4748 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:27:29.213979    4748 out.go:177] * [auto-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:27:29.221770    4748 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:27:29.221807    4748 notify.go:220] Checking for updates...
	I0923 17:27:29.229752    4748 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:27:29.232794    4748 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:27:29.235824    4748 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:27:29.238749    4748 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:27:29.241752    4748 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:27:29.244988    4748 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:27:29.245059    4748 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:27:29.245102    4748 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:27:29.249759    4748 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:27:29.255755    4748 start.go:297] selected driver: qemu2
	I0923 17:27:29.255762    4748 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:27:29.255770    4748 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:27:29.258060    4748 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:27:29.260742    4748 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:27:29.263858    4748 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:27:29.263877    4748 cni.go:84] Creating CNI manager for ""
	I0923 17:27:29.263898    4748 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:27:29.263908    4748 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:27:29.263954    4748 start.go:340] cluster config:
	{Name:auto-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:27:29.267644    4748 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:27:29.275794    4748 out.go:177] * Starting "auto-780000" primary control-plane node in "auto-780000" cluster
	I0923 17:27:29.279794    4748 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:27:29.279809    4748 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:27:29.279815    4748 cache.go:56] Caching tarball of preloaded images
	I0923 17:27:29.279875    4748 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:27:29.279880    4748 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:27:29.279930    4748 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/auto-780000/config.json ...
	I0923 17:27:29.279946    4748 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/auto-780000/config.json: {Name:mkff1dc5f9947267489286041ff748bbf1ebeecf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:27:29.280426    4748 start.go:360] acquireMachinesLock for auto-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:27:29.280460    4748 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "auto-780000"
	I0923 17:27:29.280472    4748 start.go:93] Provisioning new machine with config: &{Name:auto-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:27:29.280498    4748 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:27:29.284866    4748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:27:29.302259    4748 start.go:159] libmachine.API.Create for "auto-780000" (driver="qemu2")
	I0923 17:27:29.302286    4748 client.go:168] LocalClient.Create starting
	I0923 17:27:29.302352    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:27:29.302397    4748 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:29.302406    4748 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:29.302446    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:27:29.302478    4748 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:29.302486    4748 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:29.302910    4748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:27:29.468551    4748 main.go:141] libmachine: Creating SSH key...
	I0923 17:27:29.619981    4748 main.go:141] libmachine: Creating Disk image...
	I0923 17:27:29.619991    4748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:27:29.620254    4748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:29.630233    4748 main.go:141] libmachine: STDOUT: 
	I0923 17:27:29.630253    4748 main.go:141] libmachine: STDERR: 
	I0923 17:27:29.630328    4748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2 +20000M
	I0923 17:27:29.638457    4748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:27:29.638474    4748 main.go:141] libmachine: STDERR: 
	I0923 17:27:29.638491    4748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:29.638496    4748 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:27:29.638511    4748 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:27:29.638536    4748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:cf:03:95:86:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:29.640121    4748 main.go:141] libmachine: STDOUT: 
	I0923 17:27:29.640134    4748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:27:29.640153    4748 client.go:171] duration metric: took 337.862166ms to LocalClient.Create
	I0923 17:27:31.642336    4748 start.go:128] duration metric: took 2.361826083s to createHost
	I0923 17:27:31.642403    4748 start.go:83] releasing machines lock for "auto-780000", held for 2.361952s
	W0923 17:27:31.642513    4748 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:31.657278    4748 out.go:177] * Deleting "auto-780000" in qemu2 ...
	W0923 17:27:31.685770    4748 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:31.685795    4748 start.go:729] Will try again in 5 seconds ...
	I0923 17:27:36.688061    4748 start.go:360] acquireMachinesLock for auto-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:27:36.688682    4748 start.go:364] duration metric: took 507.791µs to acquireMachinesLock for "auto-780000"
	I0923 17:27:36.688760    4748 start.go:93] Provisioning new machine with config: &{Name:auto-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:27:36.689088    4748 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:27:36.700801    4748 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:27:36.750357    4748 start.go:159] libmachine.API.Create for "auto-780000" (driver="qemu2")
	I0923 17:27:36.750437    4748 client.go:168] LocalClient.Create starting
	I0923 17:27:36.750599    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:27:36.750677    4748 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:36.750694    4748 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:36.750763    4748 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:27:36.750812    4748 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:36.750828    4748 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:36.751513    4748 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:27:36.920006    4748 main.go:141] libmachine: Creating SSH key...
	I0923 17:27:37.084362    4748 main.go:141] libmachine: Creating Disk image...
	I0923 17:27:37.084384    4748 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:27:37.084668    4748 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:37.094271    4748 main.go:141] libmachine: STDOUT: 
	I0923 17:27:37.094288    4748 main.go:141] libmachine: STDERR: 
	I0923 17:27:37.094341    4748 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2 +20000M
	I0923 17:27:37.102481    4748 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:27:37.102497    4748 main.go:141] libmachine: STDERR: 
	I0923 17:27:37.102517    4748 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:37.102524    4748 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:27:37.102534    4748 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:27:37.102567    4748 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:35:a0:fd:25:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/auto-780000/disk.qcow2
	I0923 17:27:37.104210    4748 main.go:141] libmachine: STDOUT: 
	I0923 17:27:37.104222    4748 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:27:37.104238    4748 client.go:171] duration metric: took 353.786208ms to LocalClient.Create
	I0923 17:27:39.106555    4748 start.go:128] duration metric: took 2.417409541s to createHost
	I0923 17:27:39.106619    4748 start.go:83] releasing machines lock for "auto-780000", held for 2.417929875s
	W0923 17:27:39.107054    4748 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:39.117793    4748 out.go:201] 
	W0923 17:27:39.125946    4748 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:27:39.125989    4748 out.go:270] * 
	* 
	W0923 17:27:39.128108    4748 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:27:39.137905    4748 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.99s)
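
Every qemu2 start in this group fails the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the VM is never launched and minikube exits with status 80. That points at the socket_vmnet daemon on the CI host rather than at the tests themselves. A minimal triage sketch, assuming a standard socket_vmnet install under /opt/socket_vmnet as the log paths suggest (the brew services line is an assumption that applies only to Homebrew-managed installs):

    # Is anything listening on the unix socket that socket_vmnet_client dials?
    ls -l /var/run/socket_vmnet        # the socket file should exist
    pgrep -fl socket_vmnet             # the daemon should be running (as root)
    # If the daemon is down, restart it; for Homebrew-managed installs this is
    # typically (assumption):
    sudo brew services start socket_vmnet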

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (10.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.026963208s)

                                                
                                                
-- stdout --
	* [kindnet-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-780000" primary control-plane node in "kindnet-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:27:41.318596    4860 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:27:41.318726    4860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:41.318730    4860 out.go:358] Setting ErrFile to fd 2...
	I0923 17:27:41.318732    4860 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:41.318870    4860 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:27:41.319946    4860 out.go:352] Setting JSON to false
	I0923 17:27:41.336309    4860 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3424,"bootTime":1727134237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:27:41.336378    4860 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:27:41.343446    4860 out.go:177] * [kindnet-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:27:41.351245    4860 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:27:41.351295    4860 notify.go:220] Checking for updates...
	I0923 17:27:41.359268    4860 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:27:41.362269    4860 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:27:41.365349    4860 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:27:41.368212    4860 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:27:41.371256    4860 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:27:41.374648    4860 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:27:41.374709    4860 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:27:41.374756    4860 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:27:41.379266    4860 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:27:41.386330    4860 start.go:297] selected driver: qemu2
	I0923 17:27:41.386336    4860 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:27:41.386346    4860 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:27:41.388589    4860 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:27:41.391275    4860 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:27:41.394368    4860 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:27:41.394397    4860 cni.go:84] Creating CNI manager for "kindnet"
	I0923 17:27:41.394401    4860 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 17:27:41.394438    4860 start.go:340] cluster config:
	{Name:kindnet-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:27:41.397989    4860 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:27:41.403273    4860 out.go:177] * Starting "kindnet-780000" primary control-plane node in "kindnet-780000" cluster
	I0923 17:27:41.407247    4860 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:27:41.407264    4860 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:27:41.407275    4860 cache.go:56] Caching tarball of preloaded images
	I0923 17:27:41.407351    4860 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:27:41.407357    4860 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:27:41.407416    4860 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kindnet-780000/config.json ...
	I0923 17:27:41.407429    4860 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kindnet-780000/config.json: {Name:mk64434d2f9fefcbb3b5c48f4fd4fd8775a2ec5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:27:41.407898    4860 start.go:360] acquireMachinesLock for kindnet-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:27:41.407941    4860 start.go:364] duration metric: took 35.375µs to acquireMachinesLock for "kindnet-780000"
	I0923 17:27:41.407955    4860 start.go:93] Provisioning new machine with config: &{Name:kindnet-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:27:41.407982    4860 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:27:41.416253    4860 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:27:41.433474    4860 start.go:159] libmachine.API.Create for "kindnet-780000" (driver="qemu2")
	I0923 17:27:41.433509    4860 client.go:168] LocalClient.Create starting
	I0923 17:27:41.433578    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:27:41.433606    4860 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:41.433617    4860 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:41.433656    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:27:41.433679    4860 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:41.433687    4860 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:41.434195    4860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:27:41.594419    4860 main.go:141] libmachine: Creating SSH key...
	I0923 17:27:41.867764    4860 main.go:141] libmachine: Creating Disk image...
	I0923 17:27:41.867780    4860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:27:41.868087    4860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:41.878219    4860 main.go:141] libmachine: STDOUT: 
	I0923 17:27:41.878247    4860 main.go:141] libmachine: STDERR: 
	I0923 17:27:41.878321    4860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2 +20000M
	I0923 17:27:41.887415    4860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:27:41.887443    4860 main.go:141] libmachine: STDERR: 
	I0923 17:27:41.887463    4860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:41.887469    4860 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:27:41.887480    4860 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:27:41.887518    4860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:c9:7f:00:1e:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:41.889229    4860 main.go:141] libmachine: STDOUT: 
	I0923 17:27:41.889243    4860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:27:41.889264    4860 client.go:171] duration metric: took 455.751833ms to LocalClient.Create
	I0923 17:27:43.891454    4860 start.go:128] duration metric: took 2.483456583s to createHost
	I0923 17:27:43.891544    4860 start.go:83] releasing machines lock for "kindnet-780000", held for 2.483609792s
	W0923 17:27:43.891621    4860 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:43.902935    4860 out.go:177] * Deleting "kindnet-780000" in qemu2 ...
	W0923 17:27:43.939982    4860 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:43.940014    4860 start.go:729] Will try again in 5 seconds ...
	I0923 17:27:48.941412    4860 start.go:360] acquireMachinesLock for kindnet-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:27:48.942021    4860 start.go:364] duration metric: took 493.75µs to acquireMachinesLock for "kindnet-780000"
	I0923 17:27:48.942175    4860 start.go:93] Provisioning new machine with config: &{Name:kindnet-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:27:48.942470    4860 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:27:48.950209    4860 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:27:49.000690    4860 start.go:159] libmachine.API.Create for "kindnet-780000" (driver="qemu2")
	I0923 17:27:49.000743    4860 client.go:168] LocalClient.Create starting
	I0923 17:27:49.000875    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:27:49.000936    4860 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:49.000954    4860 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:49.001024    4860 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:27:49.001070    4860 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:49.001085    4860 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:49.001829    4860 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:27:49.171587    4860 main.go:141] libmachine: Creating SSH key...
	I0923 17:27:49.248550    4860 main.go:141] libmachine: Creating Disk image...
	I0923 17:27:49.248558    4860 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:27:49.248792    4860 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:49.258229    4860 main.go:141] libmachine: STDOUT: 
	I0923 17:27:49.258249    4860 main.go:141] libmachine: STDERR: 
	I0923 17:27:49.258321    4860 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2 +20000M
	I0923 17:27:49.266310    4860 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:27:49.266326    4860 main.go:141] libmachine: STDERR: 
	I0923 17:27:49.266345    4860 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:49.266351    4860 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:27:49.266359    4860 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:27:49.266393    4860 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:28:1b:fd:33:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kindnet-780000/disk.qcow2
	I0923 17:27:49.268040    4860 main.go:141] libmachine: STDOUT: 
	I0923 17:27:49.268053    4860 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:27:49.268065    4860 client.go:171] duration metric: took 267.31925ms to LocalClient.Create
	I0923 17:27:51.270231    4860 start.go:128] duration metric: took 2.327742375s to createHost
	I0923 17:27:51.270308    4860 start.go:83] releasing machines lock for "kindnet-780000", held for 2.328280792s
	W0923 17:27:51.270728    4860 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:51.287507    4860 out.go:201] 
	W0923 17:27:51.292270    4860 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:27:51.292293    4860 out.go:270] * 
	* 
	W0923 17:27:51.293874    4860 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:27:51.305419    4860 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.03s)
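
The launch command in these logs wraps QEMU in socket_vmnet_client: the client connects to the daemon's unix socket and hands the resulting datagram socket to the child process as file descriptor 3, which is why QEMU is started with -netdev socket,id=net0,fd=3 instead of opening a vmnet device itself. The "Connection refused" above therefore comes from the wrapper, before qemu-system-aarch64 ever runs. The invocation reduced to its shape (paths and the trailing options are abbreviated placeholders, not the exact CI arguments):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        disk.qcow2   # plus the firmware, ISO, QMP and pidfile options shown in the full command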

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.948218458s)

                                                
                                                
-- stdout --
	* [calico-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-780000" primary control-plane node in "calico-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:27:53.565334    4976 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:27:53.565487    4976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:53.565490    4976 out.go:358] Setting ErrFile to fd 2...
	I0923 17:27:53.565493    4976 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:27:53.565648    4976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:27:53.566742    4976 out.go:352] Setting JSON to false
	I0923 17:27:53.583245    4976 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3436,"bootTime":1727134237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:27:53.583307    4976 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:27:53.590681    4976 out.go:177] * [calico-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:27:53.599416    4976 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:27:53.599447    4976 notify.go:220] Checking for updates...
	I0923 17:27:53.605098    4976 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:27:53.611507    4976 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:27:53.614460    4976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:27:53.616041    4976 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:27:53.619449    4976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:27:53.622727    4976 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:27:53.622794    4976 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:27:53.622851    4976 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:27:53.626254    4976 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:27:53.633509    4976 start.go:297] selected driver: qemu2
	I0923 17:27:53.633515    4976 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:27:53.633523    4976 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:27:53.635950    4976 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:27:53.639469    4976 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:27:53.642545    4976 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:27:53.642563    4976 cni.go:84] Creating CNI manager for "calico"
	I0923 17:27:53.642567    4976 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0923 17:27:53.642593    4976 start.go:340] cluster config:
	{Name:calico-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:27:53.646318    4976 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:27:53.653440    4976 out.go:177] * Starting "calico-780000" primary control-plane node in "calico-780000" cluster
	I0923 17:27:53.657458    4976 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:27:53.657474    4976 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:27:53.657489    4976 cache.go:56] Caching tarball of preloaded images
	I0923 17:27:53.657544    4976 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:27:53.657550    4976 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:27:53.657608    4976 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/calico-780000/config.json ...
	I0923 17:27:53.657625    4976 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/calico-780000/config.json: {Name:mkdcc836fbe300f2753f07684462d461b9e4019c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:27:53.658100    4976 start.go:360] acquireMachinesLock for calico-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:27:53.658133    4976 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "calico-780000"
	I0923 17:27:53.658146    4976 start.go:93] Provisioning new machine with config: &{Name:calico-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:27:53.658174    4976 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:27:53.661425    4976 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:27:53.678134    4976 start.go:159] libmachine.API.Create for "calico-780000" (driver="qemu2")
	I0923 17:27:53.678175    4976 client.go:168] LocalClient.Create starting
	I0923 17:27:53.678245    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:27:53.678278    4976 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:53.678287    4976 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:53.678329    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:27:53.678352    4976 main.go:141] libmachine: Decoding PEM data...
	I0923 17:27:53.678361    4976 main.go:141] libmachine: Parsing certificate...
	I0923 17:27:53.678800    4976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:27:53.848307    4976 main.go:141] libmachine: Creating SSH key...
	I0923 17:27:53.991951    4976 main.go:141] libmachine: Creating Disk image...
	I0923 17:27:53.991959    4976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:27:53.992197    4976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:27:54.002240    4976 main.go:141] libmachine: STDOUT: 
	I0923 17:27:54.002265    4976 main.go:141] libmachine: STDERR: 
	I0923 17:27:54.002338    4976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2 +20000M
	I0923 17:27:54.010916    4976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:27:54.010934    4976 main.go:141] libmachine: STDERR: 
	I0923 17:27:54.010951    4976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:27:54.010957    4976 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:27:54.010970    4976 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:27:54.011004    4976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:9d:7e:7d:83:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:27:54.012713    4976 main.go:141] libmachine: STDOUT: 
	I0923 17:27:54.012728    4976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:27:54.012748    4976 client.go:171] duration metric: took 334.569166ms to LocalClient.Create
	I0923 17:27:56.014946    4976 start.go:128] duration metric: took 2.356763167s to createHost
	I0923 17:27:56.015026    4976 start.go:83] releasing machines lock for "calico-780000", held for 2.356898958s
	W0923 17:27:56.015107    4976 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:56.026160    4976 out.go:177] * Deleting "calico-780000" in qemu2 ...
	W0923 17:27:56.054783    4976 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:27:56.054808    4976 start.go:729] Will try again in 5 seconds ...
	I0923 17:28:01.055458    4976 start.go:360] acquireMachinesLock for calico-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:01.056029    4976 start.go:364] duration metric: took 450.75µs to acquireMachinesLock for "calico-780000"
	I0923 17:28:01.056107    4976 start.go:93] Provisioning new machine with config: &{Name:calico-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:01.056424    4976 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:01.069015    4976 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:01.116969    4976 start.go:159] libmachine.API.Create for "calico-780000" (driver="qemu2")
	I0923 17:28:01.117034    4976 client.go:168] LocalClient.Create starting
	I0923 17:28:01.117143    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:01.117196    4976 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:01.117209    4976 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:01.117284    4976 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:01.117325    4976 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:01.117336    4976 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:01.117971    4976 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:01.288631    4976 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:01.421477    4976 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:01.421488    4976 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:01.421726    4976 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:28:01.430968    4976 main.go:141] libmachine: STDOUT: 
	I0923 17:28:01.430984    4976 main.go:141] libmachine: STDERR: 
	I0923 17:28:01.431051    4976 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2 +20000M
	I0923 17:28:01.438922    4976 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:01.438938    4976 main.go:141] libmachine: STDERR: 
	I0923 17:28:01.438951    4976 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:28:01.438955    4976 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:01.438965    4976 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:01.438990    4976 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:39:ad:a7:e9:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/calico-780000/disk.qcow2
	I0923 17:28:01.440609    4976 main.go:141] libmachine: STDOUT: 
	I0923 17:28:01.440623    4976 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:01.440636    4976 client.go:171] duration metric: took 323.597292ms to LocalClient.Create
	I0923 17:28:03.442832    4976 start.go:128] duration metric: took 2.386393209s to createHost
	I0923 17:28:03.442908    4976 start.go:83] releasing machines lock for "calico-780000", held for 2.386871209s
	W0923 17:28:03.443288    4976 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:03.451875    4976 out.go:201] 
	W0923 17:28:03.460334    4976 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:28:03.460381    4976 out.go:270] * 
	* 
	W0923 17:28:03.462572    4976 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:28:03.472747    4976 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.95s)
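
Before each launch attempt, libmachine prepares the guest disk with the two qemu-img steps visible above: convert the raw boot image to qcow2, then grow its virtual size by 20000 MB. Reproduced standalone with illustrative file names (the CI uses per-profile paths under .minikube/machines):

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # raw boot image -> qcow2
    qemu-img resize disk.qcow2 +20000M                           # grow virtual size by ~20 GB
    # qcow2 allocates lazily, so the file stays small on disk until the guest writes to it.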

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.99656025s)

                                                
                                                
-- stdout --
	* [custom-flannel-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-780000" primary control-plane node in "custom-flannel-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:28:05.944694    5094 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:28:05.944814    5094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:05.944817    5094 out.go:358] Setting ErrFile to fd 2...
	I0923 17:28:05.944820    5094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:05.944962    5094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:28:05.946140    5094 out.go:352] Setting JSON to false
	I0923 17:28:05.962119    5094 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3448,"bootTime":1727134237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:28:05.962197    5094 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:28:05.968363    5094 out.go:177] * [custom-flannel-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:28:05.975298    5094 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:28:05.975339    5094 notify.go:220] Checking for updates...
	I0923 17:28:05.981305    5094 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:28:05.984242    5094 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:28:05.985785    5094 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:28:05.989283    5094 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:28:05.992253    5094 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:28:05.995647    5094 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:28:05.995723    5094 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:28:05.995776    5094 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:28:05.999207    5094 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:28:06.006251    5094 start.go:297] selected driver: qemu2
	I0923 17:28:06.006258    5094 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:28:06.006264    5094 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:28:06.008530    5094 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:28:06.011285    5094 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:28:06.014336    5094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:28:06.014354    5094 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0923 17:28:06.014362    5094 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0923 17:28:06.014391    5094 start.go:340] cluster config:
	{Name:custom-flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:28:06.017937    5094 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:28:06.025297    5094 out.go:177] * Starting "custom-flannel-780000" primary control-plane node in "custom-flannel-780000" cluster
	I0923 17:28:06.029243    5094 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:28:06.029256    5094 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:28:06.029266    5094 cache.go:56] Caching tarball of preloaded images
	I0923 17:28:06.029326    5094 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:28:06.029332    5094 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:28:06.029374    5094 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/custom-flannel-780000/config.json ...
	I0923 17:28:06.029384    5094 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/custom-flannel-780000/config.json: {Name:mk936b4e3164ad27f32f52f77b0b0a102ed7d121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:28:06.029724    5094 start.go:360] acquireMachinesLock for custom-flannel-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:06.029756    5094 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "custom-flannel-780000"
	I0923 17:28:06.029769    5094 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:06.029811    5094 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:06.040223    5094 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:06.055684    5094 start.go:159] libmachine.API.Create for "custom-flannel-780000" (driver="qemu2")
	I0923 17:28:06.055725    5094 client.go:168] LocalClient.Create starting
	I0923 17:28:06.055796    5094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:06.055828    5094 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:06.055837    5094 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:06.055895    5094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:06.055919    5094 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:06.055925    5094 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:06.056296    5094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:06.214753    5094 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:06.372077    5094 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:06.372091    5094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:06.372325    5094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:06.381963    5094 main.go:141] libmachine: STDOUT: 
	I0923 17:28:06.381983    5094 main.go:141] libmachine: STDERR: 
	I0923 17:28:06.382054    5094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2 +20000M
	I0923 17:28:06.390107    5094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:06.390125    5094 main.go:141] libmachine: STDERR: 
	I0923 17:28:06.390139    5094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:06.390145    5094 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:06.390157    5094 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:06.390184    5094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:44:70:5b:e7:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:06.391906    5094 main.go:141] libmachine: STDOUT: 
	I0923 17:28:06.391921    5094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:06.391942    5094 client.go:171] duration metric: took 336.212625ms to LocalClient.Create
	I0923 17:28:08.394093    5094 start.go:128] duration metric: took 2.364270958s to createHost
	I0923 17:28:08.394211    5094 start.go:83] releasing machines lock for "custom-flannel-780000", held for 2.364464583s
	W0923 17:28:08.394272    5094 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:08.403779    5094 out.go:177] * Deleting "custom-flannel-780000" in qemu2 ...
	W0923 17:28:08.435559    5094 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:08.435572    5094 start.go:729] Will try again in 5 seconds ...
	I0923 17:28:13.437799    5094 start.go:360] acquireMachinesLock for custom-flannel-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:13.438229    5094 start.go:364] duration metric: took 329.792µs to acquireMachinesLock for "custom-flannel-780000"
	I0923 17:28:13.438352    5094 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:13.438562    5094 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:13.444248    5094 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:13.487127    5094 start.go:159] libmachine.API.Create for "custom-flannel-780000" (driver="qemu2")
	I0923 17:28:13.487165    5094 client.go:168] LocalClient.Create starting
	I0923 17:28:13.487275    5094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:13.487343    5094 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:13.487369    5094 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:13.487420    5094 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:13.487459    5094 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:13.487474    5094 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:13.487933    5094 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:13.656499    5094 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:13.834503    5094 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:13.834517    5094 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:13.834782    5094 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:13.845783    5094 main.go:141] libmachine: STDOUT: 
	I0923 17:28:13.845814    5094 main.go:141] libmachine: STDERR: 
	I0923 17:28:13.845893    5094 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2 +20000M
	I0923 17:28:13.854677    5094 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:13.854692    5094 main.go:141] libmachine: STDERR: 
	I0923 17:28:13.854715    5094 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:13.854720    5094 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:13.854728    5094 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:13.854760    5094 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:92:d7:f0:2d:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/custom-flannel-780000/disk.qcow2
	I0923 17:28:13.856563    5094 main.go:141] libmachine: STDOUT: 
	I0923 17:28:13.856580    5094 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:13.856598    5094 client.go:171] duration metric: took 369.427666ms to LocalClient.Create
	I0923 17:28:15.858874    5094 start.go:128] duration metric: took 2.4202945s to createHost
	I0923 17:28:15.858964    5094 start.go:83] releasing machines lock for "custom-flannel-780000", held for 2.420732084s
	W0923 17:28:15.859369    5094 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:15.875020    5094 out.go:201] 
	W0923 17:28:15.879227    5094 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:28:15.879300    5094 out.go:270] * 
	* 
	W0923 17:28:15.881851    5094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:28:15.897136    5094 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.00s)
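
Note: the remedy the log itself prints ("minikube delete -p custom-flannel-780000") only removes the half-created profile; the socket_vmnet daemon must also be brought back before a retry can succeed. A possible recovery sequence, assuming socket_vmnet was installed via Homebrew (an assumption; the install method on this host is not visible in the log):

	# restart the daemon (Homebrew-managed install assumed), then clean up and retry
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 delete -p custom-flannel-780000
	out/minikube-darwin-arm64 start -p custom-flannel-780000 --driver=qemu2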

TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.763494417s)

-- stdout --
	* [false-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-780000" primary control-plane node in "false-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:28:18.310038    5211 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:28:18.310192    5211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:18.310195    5211 out.go:358] Setting ErrFile to fd 2...
	I0923 17:28:18.310198    5211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:18.310313    5211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:28:18.311474    5211 out.go:352] Setting JSON to false
	I0923 17:28:18.328358    5211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3461,"bootTime":1727134237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:28:18.328467    5211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:28:18.333859    5211 out.go:177] * [false-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:28:18.342531    5211 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:28:18.342568    5211 notify.go:220] Checking for updates...
	I0923 17:28:18.348529    5211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:28:18.351573    5211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:28:18.353038    5211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:28:18.356537    5211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:28:18.359546    5211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:28:18.362954    5211 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:28:18.363021    5211 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:28:18.363077    5211 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:28:18.367494    5211 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:28:18.374536    5211 start.go:297] selected driver: qemu2
	I0923 17:28:18.374542    5211 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:28:18.374548    5211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:28:18.376742    5211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:28:18.380554    5211 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:28:18.383650    5211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:28:18.383676    5211 cni.go:84] Creating CNI manager for "false"
	I0923 17:28:18.383709    5211 start.go:340] cluster config:
	{Name:false-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:28:18.387255    5211 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:28:18.394562    5211 out.go:177] * Starting "false-780000" primary control-plane node in "false-780000" cluster
	I0923 17:28:18.398491    5211 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:28:18.398504    5211 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:28:18.398512    5211 cache.go:56] Caching tarball of preloaded images
	I0923 17:28:18.398570    5211 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:28:18.398576    5211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:28:18.398626    5211 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/false-780000/config.json ...
	I0923 17:28:18.398637    5211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/false-780000/config.json: {Name:mkc00166e50c4f5f7a87351bdc8df8d4828ce7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:28:18.398875    5211 start.go:360] acquireMachinesLock for false-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:18.398909    5211 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "false-780000"
	I0923 17:28:18.398923    5211 start.go:93] Provisioning new machine with config: &{Name:false-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:18.398964    5211 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:18.407553    5211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:18.424946    5211 start.go:159] libmachine.API.Create for "false-780000" (driver="qemu2")
	I0923 17:28:18.424985    5211 client.go:168] LocalClient.Create starting
	I0923 17:28:18.425054    5211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:18.425083    5211 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:18.425093    5211 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:18.425133    5211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:18.425160    5211 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:18.425169    5211 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:18.425502    5211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:18.587524    5211 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:18.633714    5211 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:18.633720    5211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:18.633947    5211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:18.643205    5211 main.go:141] libmachine: STDOUT: 
	I0923 17:28:18.643227    5211 main.go:141] libmachine: STDERR: 
	I0923 17:28:18.643299    5211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2 +20000M
	I0923 17:28:18.651186    5211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:18.651202    5211 main.go:141] libmachine: STDERR: 
	I0923 17:28:18.651217    5211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:18.651223    5211 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:18.651235    5211 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:18.651260    5211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:6f:6c:c9:3a:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:18.652900    5211 main.go:141] libmachine: STDOUT: 
	I0923 17:28:18.652918    5211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:18.652940    5211 client.go:171] duration metric: took 227.950541ms to LocalClient.Create
	I0923 17:28:20.654603    5211 start.go:128] duration metric: took 2.2556415s to createHost
	I0923 17:28:20.654637    5211 start.go:83] releasing machines lock for "false-780000", held for 2.255737667s
	W0923 17:28:20.654685    5211 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:20.676968    5211 out.go:177] * Deleting "false-780000" in qemu2 ...
	W0923 17:28:20.705655    5211 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:20.705668    5211 start.go:729] Will try again in 5 seconds ...
	I0923 17:28:25.707839    5211 start.go:360] acquireMachinesLock for false-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:25.708161    5211 start.go:364] duration metric: took 257.042µs to acquireMachinesLock for "false-780000"
	I0923 17:28:25.708204    5211 start.go:93] Provisioning new machine with config: &{Name:false-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:25.708372    5211 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:25.715815    5211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:25.753154    5211 start.go:159] libmachine.API.Create for "false-780000" (driver="qemu2")
	I0923 17:28:25.753216    5211 client.go:168] LocalClient.Create starting
	I0923 17:28:25.753335    5211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:25.753402    5211 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:25.753422    5211 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:25.753482    5211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:25.753524    5211 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:25.753538    5211 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:25.754143    5211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:25.921640    5211 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:25.984570    5211 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:25.984576    5211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:25.984796    5211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:25.994400    5211 main.go:141] libmachine: STDOUT: 
	I0923 17:28:25.994421    5211 main.go:141] libmachine: STDERR: 
	I0923 17:28:25.994476    5211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2 +20000M
	I0923 17:28:26.002384    5211 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:26.002401    5211 main.go:141] libmachine: STDERR: 
	I0923 17:28:26.002414    5211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:26.002421    5211 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:26.002428    5211 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:26.002459    5211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:fa:9f:12:53:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/false-780000/disk.qcow2
	I0923 17:28:26.004096    5211 main.go:141] libmachine: STDOUT: 
	I0923 17:28:26.004111    5211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:26.004123    5211 client.go:171] duration metric: took 250.903875ms to LocalClient.Create
	I0923 17:28:28.006217    5211 start.go:128] duration metric: took 2.297840375s to createHost
	I0923 17:28:28.006252    5211 start.go:83] releasing machines lock for "false-780000", held for 2.29809275s
	W0923 17:28:28.006432    5211 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:28.015790    5211 out.go:201] 
	W0923 17:28:28.025950    5211 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:28:28.025973    5211 out.go:270] * 
	* 
	W0923 17:28:28.027051    5211 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:28:28.035838    5211 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)
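
Note: the retry visible in these logs is minikube's built-in behavior (one further attempt after 5 seconds), so a persistent socket_vmnet outage always ends in GUEST_PROVISION and exit status 80. If the dedicated network cannot be restored, one fallback sketch is to run the qemu2 driver with QEMU's user-mode networking, which does not use socket_vmnet at all (assuming --network=builtin is supported by this minikube build; it comes with networking limitations relative to socket_vmnet):

	# fall back to user-mode networking instead of socket_vmnet
	out/minikube-darwin-arm64 start -p false-780000 --driver=qemu2 --network=builtin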

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.789845875s)

-- stdout --
	* [enable-default-cni-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-780000" primary control-plane node in "enable-default-cni-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:28:30.264747    5320 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:28:30.264865    5320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:30.264868    5320 out.go:358] Setting ErrFile to fd 2...
	I0923 17:28:30.264878    5320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:30.265040    5320 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:28:30.266116    5320 out.go:352] Setting JSON to false
	I0923 17:28:30.282134    5320 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3473,"bootTime":1727134237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:28:30.282202    5320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:28:30.289890    5320 out.go:177] * [enable-default-cni-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:28:30.298722    5320 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:28:30.298805    5320 notify.go:220] Checking for updates...
	I0923 17:28:30.306659    5320 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:28:30.309590    5320 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:28:30.312671    5320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:28:30.315669    5320 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:28:30.318673    5320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:28:30.322007    5320 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:28:30.322083    5320 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:28:30.322129    5320 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:28:30.325621    5320 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:28:30.332636    5320 start.go:297] selected driver: qemu2
	I0923 17:28:30.332640    5320 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:28:30.332646    5320 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:28:30.334736    5320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:28:30.337707    5320 out.go:177] * Automatically selected the socket_vmnet network
	E0923 17:28:30.340730    5320 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0923 17:28:30.340746    5320 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:28:30.340766    5320 cni.go:84] Creating CNI manager for "bridge"
	I0923 17:28:30.340777    5320 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:28:30.340799    5320 start.go:340] cluster config:
	{Name:enable-default-cni-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:28:30.344085    5320 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:28:30.351706    5320 out.go:177] * Starting "enable-default-cni-780000" primary control-plane node in "enable-default-cni-780000" cluster
	I0923 17:28:30.355560    5320 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:28:30.355572    5320 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:28:30.355578    5320 cache.go:56] Caching tarball of preloaded images
	I0923 17:28:30.355625    5320 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:28:30.355631    5320 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:28:30.355681    5320 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/enable-default-cni-780000/config.json ...
	I0923 17:28:30.355691    5320 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/enable-default-cni-780000/config.json: {Name:mk33d0411894a9324454b5572bfc6100ef912bdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:28:30.355993    5320 start.go:360] acquireMachinesLock for enable-default-cni-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:30.356024    5320 start.go:364] duration metric: took 25.125µs to acquireMachinesLock for "enable-default-cni-780000"
	I0923 17:28:30.356036    5320 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:30.356060    5320 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:30.364611    5320 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:30.379707    5320 start.go:159] libmachine.API.Create for "enable-default-cni-780000" (driver="qemu2")
	I0923 17:28:30.379742    5320 client.go:168] LocalClient.Create starting
	I0923 17:28:30.379809    5320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:30.379841    5320 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:30.379849    5320 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:30.379885    5320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:30.379909    5320 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:30.379918    5320 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:30.380257    5320 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:30.540863    5320 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:30.612355    5320 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:30.612361    5320 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:30.612595    5320 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:30.622067    5320 main.go:141] libmachine: STDOUT: 
	I0923 17:28:30.622086    5320 main.go:141] libmachine: STDERR: 
	I0923 17:28:30.622146    5320 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2 +20000M
	I0923 17:28:30.630539    5320 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:30.630558    5320 main.go:141] libmachine: STDERR: 
	I0923 17:28:30.630584    5320 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:30.630589    5320 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:30.630602    5320 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:30.630630    5320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:48:2f:c6:99:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:30.632447    5320 main.go:141] libmachine: STDOUT: 
	I0923 17:28:30.632461    5320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:30.632481    5320 client.go:171] duration metric: took 252.734792ms to LocalClient.Create
	I0923 17:28:32.634599    5320 start.go:128] duration metric: took 2.278540541s to createHost
	I0923 17:28:32.634643    5320 start.go:83] releasing machines lock for "enable-default-cni-780000", held for 2.278629041s
	W0923 17:28:32.634697    5320 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:32.643362    5320 out.go:177] * Deleting "enable-default-cni-780000" in qemu2 ...
	W0923 17:28:32.674182    5320 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:32.674195    5320 start.go:729] Will try again in 5 seconds ...
	I0923 17:28:37.676295    5320 start.go:360] acquireMachinesLock for enable-default-cni-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:37.676553    5320 start.go:364] duration metric: took 205.041µs to acquireMachinesLock for "enable-default-cni-780000"
	I0923 17:28:37.676591    5320 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:37.676715    5320 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:37.694918    5320 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:37.721033    5320 start.go:159] libmachine.API.Create for "enable-default-cni-780000" (driver="qemu2")
	I0923 17:28:37.721071    5320 client.go:168] LocalClient.Create starting
	I0923 17:28:37.721156    5320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:37.721209    5320 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:37.721223    5320 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:37.721271    5320 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:37.721301    5320 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:37.721311    5320 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:37.721809    5320 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:37.887834    5320 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:37.954163    5320 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:37.954169    5320 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:37.954392    5320 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:37.963975    5320 main.go:141] libmachine: STDOUT: 
	I0923 17:28:37.963998    5320 main.go:141] libmachine: STDERR: 
	I0923 17:28:37.964058    5320 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2 +20000M
	I0923 17:28:37.972031    5320 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:37.972047    5320 main.go:141] libmachine: STDERR: 
	I0923 17:28:37.972060    5320 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:37.972064    5320 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:37.972071    5320 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:37.972095    5320 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:0d:d0:3e:d7:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/enable-default-cni-780000/disk.qcow2
	I0923 17:28:37.973830    5320 main.go:141] libmachine: STDOUT: 
	I0923 17:28:37.973844    5320 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:37.973858    5320 client.go:171] duration metric: took 252.781584ms to LocalClient.Create
	I0923 17:28:39.976015    5320 start.go:128] duration metric: took 2.299282375s to createHost
	I0923 17:28:39.976067    5320 start.go:83] releasing machines lock for "enable-default-cni-780000", held for 2.299516416s
	W0923 17:28:39.976518    5320 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:39.993356    5320 out.go:201] 
	W0923 17:28:39.995145    5320 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:28:39.995171    5320 out.go:270] * 
	* 
	W0923 17:28:39.997662    5320 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:28:40.012314    5320 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
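Note: this failure is the same socket_vmnet connection error seen in every qemu2 start in this group: the driver shells out to socket_vmnet_client, which cannot reach the daemon's socket at /var/run/socket_vmnet, so the VM is never created and minikube exits with status 80. A minimal host-side check, as a sketch: the paths come from the logs above, while the launchd/Homebrew service commands assume a default Homebrew install of socket_vmnet and are not taken from this report.

	# Does the daemon socket exist, and is the service loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If not, (re)start the Homebrew-managed service (assumed install method):
	sudo brew services restart socket_vmnet
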
TestNetworkPlugins/group/flannel/Start (9.86s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0923 17:28:42.437932    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.854423708s)
-- stdout --
	* [flannel-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-780000" primary control-plane node in "flannel-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0923 17:28:42.256502    5429 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:28:42.256633    5429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:42.256637    5429 out.go:358] Setting ErrFile to fd 2...
	I0923 17:28:42.256639    5429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:42.256778    5429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:28:42.258689    5429 out.go:352] Setting JSON to false
	I0923 17:28:42.274823    5429 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3485,"bootTime":1727134237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:28:42.274900    5429 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:28:42.281678    5429 out.go:177] * [flannel-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:28:42.289554    5429 notify.go:220] Checking for updates...
	I0923 17:28:42.292594    5429 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:28:42.300506    5429 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:28:42.308422    5429 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:28:42.311521    5429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:28:42.315355    5429 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:28:42.318586    5429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:28:42.321812    5429 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:28:42.321872    5429 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:28:42.321919    5429 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:28:42.326842    5429 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:28:42.334497    5429 start.go:297] selected driver: qemu2
	I0923 17:28:42.334502    5429 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:28:42.334507    5429 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:28:42.336549    5429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:28:42.337907    5429 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:28:42.340508    5429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:28:42.340525    5429 cni.go:84] Creating CNI manager for "flannel"
	I0923 17:28:42.340533    5429 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0923 17:28:42.340554    5429 start.go:340] cluster config:
	{Name:flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:28:42.343617    5429 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:28:42.350491    5429 out.go:177] * Starting "flannel-780000" primary control-plane node in "flannel-780000" cluster
	I0923 17:28:42.354425    5429 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:28:42.354437    5429 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:28:42.354444    5429 cache.go:56] Caching tarball of preloaded images
	I0923 17:28:42.354494    5429 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:28:42.354499    5429 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:28:42.354552    5429 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/flannel-780000/config.json ...
	I0923 17:28:42.354562    5429 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/flannel-780000/config.json: {Name:mkbb8c8478bb214f06e8a7048764c7a7ec3a1c6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:28:42.354752    5429 start.go:360] acquireMachinesLock for flannel-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:42.354788    5429 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "flannel-780000"
	I0923 17:28:42.354800    5429 start.go:93] Provisioning new machine with config: &{Name:flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:42.354826    5429 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:42.358458    5429 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:42.373667    5429 start.go:159] libmachine.API.Create for "flannel-780000" (driver="qemu2")
	I0923 17:28:42.373689    5429 client.go:168] LocalClient.Create starting
	I0923 17:28:42.373751    5429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:42.373783    5429 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:42.373792    5429 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:42.373837    5429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:42.373869    5429 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:42.373877    5429 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:42.374213    5429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:42.534431    5429 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:42.643179    5429 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:42.643187    5429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:42.643416    5429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:42.652658    5429 main.go:141] libmachine: STDOUT: 
	I0923 17:28:42.652688    5429 main.go:141] libmachine: STDERR: 
	I0923 17:28:42.652752    5429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2 +20000M
	I0923 17:28:42.660944    5429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:42.661026    5429 main.go:141] libmachine: STDERR: 
	I0923 17:28:42.661045    5429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:42.661050    5429 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:42.661068    5429 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:42.661094    5429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:c4:8c:c0:63:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:42.662794    5429 main.go:141] libmachine: STDOUT: 
	I0923 17:28:42.662810    5429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:42.662835    5429 client.go:171] duration metric: took 289.1415ms to LocalClient.Create
	I0923 17:28:44.665058    5429 start.go:128] duration metric: took 2.310211541s to createHost
	I0923 17:28:44.665137    5429 start.go:83] releasing machines lock for "flannel-780000", held for 2.310356209s
	W0923 17:28:44.665228    5429 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:44.683570    5429 out.go:177] * Deleting "flannel-780000" in qemu2 ...
	W0923 17:28:44.716502    5429 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:44.716531    5429 start.go:729] Will try again in 5 seconds ...
	I0923 17:28:49.718797    5429 start.go:360] acquireMachinesLock for flannel-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:49.719435    5429 start.go:364] duration metric: took 506.708µs to acquireMachinesLock for "flannel-780000"
	I0923 17:28:49.719522    5429 start.go:93] Provisioning new machine with config: &{Name:flannel-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:49.719802    5429 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:49.726422    5429 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:49.776632    5429 start.go:159] libmachine.API.Create for "flannel-780000" (driver="qemu2")
	I0923 17:28:49.776693    5429 client.go:168] LocalClient.Create starting
	I0923 17:28:49.776840    5429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:49.776913    5429 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:49.776935    5429 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:49.777011    5429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:49.777062    5429 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:49.777076    5429 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:49.777857    5429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:49.946699    5429 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:50.023093    5429 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:50.023102    5429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:50.023351    5429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:50.032620    5429 main.go:141] libmachine: STDOUT: 
	I0923 17:28:50.032649    5429 main.go:141] libmachine: STDERR: 
	I0923 17:28:50.032705    5429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2 +20000M
	I0923 17:28:50.040839    5429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:50.040863    5429 main.go:141] libmachine: STDERR: 
	I0923 17:28:50.040877    5429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:50.040882    5429 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:50.040902    5429 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:50.040935    5429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8d:de:4d:ac:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/flannel-780000/disk.qcow2
	I0923 17:28:50.042636    5429 main.go:141] libmachine: STDOUT: 
	I0923 17:28:50.042651    5429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:50.042664    5429 client.go:171] duration metric: took 265.964292ms to LocalClient.Create
	I0923 17:28:52.044874    5429 start.go:128] duration metric: took 2.325045375s to createHost
	I0923 17:28:52.044952    5429 start.go:83] releasing machines lock for "flannel-780000", held for 2.325507584s
	W0923 17:28:52.045357    5429 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:52.055841    5429 out.go:201] 
	W0923 17:28:52.063996    5429 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:28:52.064027    5429 out.go:270] * 
	* 
	W0923 17:28:52.065783    5429 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:28:52.073828    5429 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.86s)
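Note: the network layer can be isolated from minikube and qemu by invoking the same socket_vmnet_client binary these logs show, wrapping a no-op command in place of qemu-system-aarch64. A sketch, under the assumption that socket_vmnet_client keeps its documented "socket_vmnet_client SOCKET COMMAND [ARGS...]" interface:

	# Connects to the socket and execs the command with the vmnet fd attached.
	# On this host it should fail exactly as the driver does:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# expected: Failed to connect to "/var/run/socket_vmnet": Connection refused
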
TestNetworkPlugins/group/bridge/Start (9.82s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.817716334s)
-- stdout --
	* [bridge-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-780000" primary control-plane node in "bridge-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0923 17:28:54.500127    5552 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:28:54.500267    5552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:54.500270    5552 out.go:358] Setting ErrFile to fd 2...
	I0923 17:28:54.500272    5552 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:28:54.500412    5552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:28:54.501480    5552 out.go:352] Setting JSON to false
	I0923 17:28:54.518155    5552 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3497,"bootTime":1727134237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:28:54.518227    5552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:28:54.525664    5552 out.go:177] * [bridge-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:28:54.533542    5552 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:28:54.533561    5552 notify.go:220] Checking for updates...
	I0923 17:28:54.540534    5552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:28:54.543495    5552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:28:54.546576    5552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:28:54.549541    5552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:28:54.552503    5552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:28:54.555829    5552 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:28:54.555899    5552 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:28:54.555955    5552 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:28:54.560578    5552 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:28:54.567533    5552 start.go:297] selected driver: qemu2
	I0923 17:28:54.567540    5552 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:28:54.567546    5552 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:28:54.569989    5552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:28:54.572516    5552 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:28:54.575522    5552 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:28:54.575539    5552 cni.go:84] Creating CNI manager for "bridge"
	I0923 17:28:54.575543    5552 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:28:54.575573    5552 start.go:340] cluster config:
	{Name:bridge-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:28:54.579281    5552 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:28:54.586599    5552 out.go:177] * Starting "bridge-780000" primary control-plane node in "bridge-780000" cluster
	I0923 17:28:54.590554    5552 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:28:54.590571    5552 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:28:54.590580    5552 cache.go:56] Caching tarball of preloaded images
	I0923 17:28:54.590655    5552 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:28:54.590661    5552 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:28:54.590719    5552 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/bridge-780000/config.json ...
	I0923 17:28:54.590729    5552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/bridge-780000/config.json: {Name:mkbab1f91ad9664099124213995fe11563efa895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:28:54.590930    5552 start.go:360] acquireMachinesLock for bridge-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:28:54.590959    5552 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "bridge-780000"
	I0923 17:28:54.590971    5552 start.go:93] Provisioning new machine with config: &{Name:bridge-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:28:54.590993    5552 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:28:54.599515    5552 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:28:54.615615    5552 start.go:159] libmachine.API.Create for "bridge-780000" (driver="qemu2")
	I0923 17:28:54.615640    5552 client.go:168] LocalClient.Create starting
	I0923 17:28:54.615717    5552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:28:54.615752    5552 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:54.615760    5552 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:54.615797    5552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:28:54.615821    5552 main.go:141] libmachine: Decoding PEM data...
	I0923 17:28:54.615828    5552 main.go:141] libmachine: Parsing certificate...
	I0923 17:28:54.616170    5552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:28:54.780634    5552 main.go:141] libmachine: Creating SSH key...
	I0923 17:28:54.815709    5552 main.go:141] libmachine: Creating Disk image...
	I0923 17:28:54.815720    5552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:28:54.815950    5552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:28:54.825413    5552 main.go:141] libmachine: STDOUT: 
	I0923 17:28:54.825438    5552 main.go:141] libmachine: STDERR: 
	I0923 17:28:54.825510    5552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2 +20000M
	I0923 17:28:54.833786    5552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:28:54.833802    5552 main.go:141] libmachine: STDERR: 
	I0923 17:28:54.833820    5552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:28:54.833825    5552 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:28:54.833841    5552 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:28:54.833866    5552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:22:df:bf:8c:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:28:54.835512    5552 main.go:141] libmachine: STDOUT: 
	I0923 17:28:54.835525    5552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:28:54.835544    5552 client.go:171] duration metric: took 219.899417ms to LocalClient.Create
	I0923 17:28:56.837647    5552 start.go:128] duration metric: took 2.246655541s to createHost
	I0923 17:28:56.837682    5552 start.go:83] releasing machines lock for "bridge-780000", held for 2.246732417s
	W0923 17:28:56.837745    5552 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:56.852500    5552 out.go:177] * Deleting "bridge-780000" in qemu2 ...
	W0923 17:28:56.874845    5552 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:28:56.874854    5552 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:01.877076    5552 start.go:360] acquireMachinesLock for bridge-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:01.877594    5552 start.go:364] duration metric: took 393.125µs to acquireMachinesLock for "bridge-780000"
	I0923 17:29:01.877666    5552 start.go:93] Provisioning new machine with config: &{Name:bridge-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:01.877940    5552 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:01.884689    5552 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:29:01.929556    5552 start.go:159] libmachine.API.Create for "bridge-780000" (driver="qemu2")
	I0923 17:29:01.929629    5552 client.go:168] LocalClient.Create starting
	I0923 17:29:01.929748    5552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:01.929804    5552 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:01.929817    5552 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:01.929871    5552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:01.929909    5552 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:01.929919    5552 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:01.930447    5552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:02.103127    5552 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:02.211849    5552 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:02.211857    5552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:02.212085    5552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:29:02.221793    5552 main.go:141] libmachine: STDOUT: 
	I0923 17:29:02.221809    5552 main.go:141] libmachine: STDERR: 
	I0923 17:29:02.221872    5552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2 +20000M
	I0923 17:29:02.229928    5552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:02.229945    5552 main.go:141] libmachine: STDERR: 
	I0923 17:29:02.229956    5552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:29:02.229961    5552 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:02.229972    5552 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:02.230004    5552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:cb:3a:81:bd:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/bridge-780000/disk.qcow2
	I0923 17:29:02.231718    5552 main.go:141] libmachine: STDOUT: 
	I0923 17:29:02.231734    5552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:02.231750    5552 client.go:171] duration metric: took 302.117291ms to LocalClient.Create
	I0923 17:29:04.233973    5552 start.go:128] duration metric: took 2.35600225s to createHost
	I0923 17:29:04.234055    5552 start.go:83] releasing machines lock for "bridge-780000", held for 2.35645125s
	W0923 17:29:04.234463    5552 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:04.251338    5552 out.go:201] 
	W0923 17:29:04.256370    5552 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:04.256406    5552 out.go:270] * 
	* 
	W0923 17:29:04.259145    5552 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:04.274260    5552 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)
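
Note: this failure is not specific to the bridge CNI. In both attempts above, qemu-system-aarch64 is launched through socket_vmnet_client, which exits immediately with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon is not listening on the build host, so the VM is never created. A minimal way to reproduce the probe outside of minikube, assuming the install paths shown in the log; the two commands below are illustrative and not part of the test suite:

    # Does the control socket exist at the path minikube uses?
    ls -l /var/run/socket_vmnet

    # Probe it the same way minikube does: socket_vmnet_client connects
    # to the socket, then execs the given command with the vmnet fd attached.
    # "Connection refused" here reproduces the test failure without minikube.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true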

TestNetworkPlugins/group/kubenet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-780000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.757834167s)

-- stdout --
	* [kubenet-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-780000" primary control-plane node in "kubenet-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:06.475604    5665 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:06.475741    5665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:06.475744    5665 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:06.475746    5665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:06.475891    5665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:06.477115    5665 out.go:352] Setting JSON to false
	I0923 17:29:06.493121    5665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3509,"bootTime":1727134237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:06.493195    5665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:06.499971    5665 out.go:177] * [kubenet-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:06.507796    5665 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:06.507873    5665 notify.go:220] Checking for updates...
	I0923 17:29:06.515714    5665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:06.518816    5665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:06.521816    5665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:06.524803    5665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:06.527800    5665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:06.531095    5665 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:06.531163    5665 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:29:06.531216    5665 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:06.535805    5665 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:29:06.541726    5665 start.go:297] selected driver: qemu2
	I0923 17:29:06.541731    5665 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:29:06.541737    5665 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:06.544182    5665 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:29:06.546765    5665 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:29:06.549853    5665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:06.549871    5665 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0923 17:29:06.549894    5665 start.go:340] cluster config:
	{Name:kubenet-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:06.553334    5665 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:06.560769    5665 out.go:177] * Starting "kubenet-780000" primary control-plane node in "kubenet-780000" cluster
	I0923 17:29:06.564824    5665 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:06.564836    5665 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:29:06.564852    5665 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:06.564910    5665 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:06.564918    5665 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:29:06.564982    5665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kubenet-780000/config.json ...
	I0923 17:29:06.564992    5665 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/kubenet-780000/config.json: {Name:mk7ba0fcaa097886d43ff943c54763947d84a451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:29:06.565316    5665 start.go:360] acquireMachinesLock for kubenet-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:06.565352    5665 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "kubenet-780000"
	I0923 17:29:06.565364    5665 start.go:93] Provisioning new machine with config: &{Name:kubenet-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:06.565387    5665 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:06.569809    5665 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:29:06.585096    5665 start.go:159] libmachine.API.Create for "kubenet-780000" (driver="qemu2")
	I0923 17:29:06.585124    5665 client.go:168] LocalClient.Create starting
	I0923 17:29:06.585185    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:06.585214    5665 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:06.585227    5665 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:06.585266    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:06.585289    5665 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:06.585298    5665 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:06.585631    5665 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:06.748274    5665 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:06.816896    5665 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:06.816901    5665 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:06.817120    5665 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:06.826652    5665 main.go:141] libmachine: STDOUT: 
	I0923 17:29:06.826675    5665 main.go:141] libmachine: STDERR: 
	I0923 17:29:06.826726    5665 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2 +20000M
	I0923 17:29:06.834862    5665 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:06.834879    5665 main.go:141] libmachine: STDERR: 
	I0923 17:29:06.834901    5665 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:06.834907    5665 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:06.834920    5665 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:06.834947    5665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2d:28:dc:43:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:06.836649    5665 main.go:141] libmachine: STDOUT: 
	I0923 17:29:06.836665    5665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:06.836686    5665 client.go:171] duration metric: took 251.554125ms to LocalClient.Create
	I0923 17:29:08.838719    5665 start.go:128] duration metric: took 2.273344209s to createHost
	I0923 17:29:08.838732    5665 start.go:83] releasing machines lock for "kubenet-780000", held for 2.273392292s
	W0923 17:29:08.838746    5665 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:08.852403    5665 out.go:177] * Deleting "kubenet-780000" in qemu2 ...
	W0923 17:29:08.870259    5665 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:08.870264    5665 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:13.872417    5665 start.go:360] acquireMachinesLock for kubenet-780000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:13.873035    5665 start.go:364] duration metric: took 480.916µs to acquireMachinesLock for "kubenet-780000"
	I0923 17:29:13.873206    5665 start.go:93] Provisioning new machine with config: &{Name:kubenet-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:13.873727    5665 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:13.886309    5665 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 17:29:13.937070    5665 start.go:159] libmachine.API.Create for "kubenet-780000" (driver="qemu2")
	I0923 17:29:13.937127    5665 client.go:168] LocalClient.Create starting
	I0923 17:29:13.937258    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:13.937330    5665 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:13.937348    5665 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:13.937413    5665 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:13.937459    5665 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:13.937473    5665 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:13.938061    5665 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:14.109744    5665 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:14.142205    5665 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:14.142211    5665 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:14.142453    5665 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:14.151658    5665 main.go:141] libmachine: STDOUT: 
	I0923 17:29:14.151677    5665 main.go:141] libmachine: STDERR: 
	I0923 17:29:14.151735    5665 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2 +20000M
	I0923 17:29:14.159978    5665 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:14.159994    5665 main.go:141] libmachine: STDERR: 
	I0923 17:29:14.160008    5665 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:14.160014    5665 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:14.160025    5665 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:14.160066    5665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ca:ad:08:ae:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/kubenet-780000/disk.qcow2
	I0923 17:29:14.161897    5665 main.go:141] libmachine: STDOUT: 
	I0923 17:29:14.161908    5665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:14.161919    5665 client.go:171] duration metric: took 224.787417ms to LocalClient.Create
	I0923 17:29:16.164108    5665 start.go:128] duration metric: took 2.290360875s to createHost
	I0923 17:29:16.164198    5665 start.go:83] releasing machines lock for "kubenet-780000", held for 2.291115958s
	W0923 17:29:16.164542    5665 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:16.173144    5665 out.go:201] 
	W0923 17:29:16.181214    5665 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:16.181267    5665 out.go:270] * 
	* 
	W0923 17:29:16.183290    5665 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:16.193279    5665 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.76s)
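
Note: the kubenet start fails the same way, and every Start test in this group follows the same two-attempt pattern: createHost fails after roughly 2.3s, minikube deletes the half-created profile, retries once after 5 seconds, then exits with status 80 (GUEST_PROVISION). The common fix is to restore the socket_vmnet daemon on the agent; a sketch under the assumption of a Homebrew-managed install, where the service name and the foreground invocation come from socket_vmnet's documentation rather than from this log:

    # Restart the daemon if it is installed as a Homebrew service
    # (needs root; `brew services list` shows the exact service name).
    sudo brew services restart socket_vmnet

    # Or run it in the foreground to watch for errors; the socket path
    # matches the one every failing test above tried to connect to.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet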

TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.949451625s)

-- stdout --
	* [old-k8s-version-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-908000" primary control-plane node in "old-k8s-version-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:18.401612    5778 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:18.401759    5778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:18.401764    5778 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:18.401766    5778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:18.401886    5778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:18.402940    5778 out.go:352] Setting JSON to false
	I0923 17:29:18.420018    5778 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3521,"bootTime":1727134237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:18.420093    5778 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:18.427120    5778 out.go:177] * [old-k8s-version-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:18.436006    5778 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:18.436043    5778 notify.go:220] Checking for updates...
	I0923 17:29:18.443950    5778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:18.446982    5778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:18.449961    5778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:18.453018    5778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:18.455986    5778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:18.459331    5778 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:18.459401    5778 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:29:18.459455    5778 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:18.463926    5778 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:29:18.470942    5778 start.go:297] selected driver: qemu2
	I0923 17:29:18.470953    5778 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:29:18.470960    5778 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:18.473344    5778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:29:18.475967    5778 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:29:18.479082    5778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:18.479107    5778 cni.go:84] Creating CNI manager for ""
	I0923 17:29:18.479144    5778 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 17:29:18.479181    5778 start.go:340] cluster config:
	{Name:old-k8s-version-908000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:18.483112    5778 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:18.490965    5778 out.go:177] * Starting "old-k8s-version-908000" primary control-plane node in "old-k8s-version-908000" cluster
	I0923 17:29:18.495006    5778 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 17:29:18.495024    5778 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 17:29:18.495037    5778 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:18.495107    5778 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:18.495113    5778 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 17:29:18.495181    5778 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/old-k8s-version-908000/config.json ...
	I0923 17:29:18.495195    5778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/old-k8s-version-908000/config.json: {Name:mk2276629b83542b652b2148f45433808f578541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:29:18.495446    5778 start.go:360] acquireMachinesLock for old-k8s-version-908000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:18.495483    5778 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "old-k8s-version-908000"
	I0923 17:29:18.495497    5778 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:18.495527    5778 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:18.503966    5778 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:18.523121    5778 start.go:159] libmachine.API.Create for "old-k8s-version-908000" (driver="qemu2")
	I0923 17:29:18.523161    5778 client.go:168] LocalClient.Create starting
	I0923 17:29:18.523234    5778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:18.523270    5778 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:18.523280    5778 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:18.523324    5778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:18.523348    5778 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:18.523355    5778 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:18.523823    5778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:18.686818    5778 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:18.799787    5778 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:18.799794    5778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:18.800019    5778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:18.809327    5778 main.go:141] libmachine: STDOUT: 
	I0923 17:29:18.809347    5778 main.go:141] libmachine: STDERR: 
	I0923 17:29:18.809404    5778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2 +20000M
	I0923 17:29:18.817694    5778 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:18.817710    5778 main.go:141] libmachine: STDERR: 
	I0923 17:29:18.817727    5778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:18.817731    5778 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:18.817744    5778 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:18.817772    5778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:dd:69:0c:80:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:18.819390    5778 main.go:141] libmachine: STDOUT: 
	I0923 17:29:18.819416    5778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:18.819438    5778 client.go:171] duration metric: took 296.273125ms to LocalClient.Create
	I0923 17:29:20.821642    5778 start.go:128] duration metric: took 2.326101666s to createHost
	I0923 17:29:20.821723    5778 start.go:83] releasing machines lock for "old-k8s-version-908000", held for 2.326244333s
	W0923 17:29:20.821893    5778 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:20.832067    5778 out.go:177] * Deleting "old-k8s-version-908000" in qemu2 ...
	W0923 17:29:20.865932    5778 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:20.865957    5778 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:25.868007    5778 start.go:360] acquireMachinesLock for old-k8s-version-908000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:25.868111    5778 start.go:364] duration metric: took 84.5µs to acquireMachinesLock for "old-k8s-version-908000"
	I0923 17:29:25.868131    5778 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:25.868189    5778 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:25.875579    5778 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:25.891169    5778 start.go:159] libmachine.API.Create for "old-k8s-version-908000" (driver="qemu2")
	I0923 17:29:25.891196    5778 client.go:168] LocalClient.Create starting
	I0923 17:29:25.891257    5778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:25.891294    5778 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:25.891303    5778 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:25.891335    5778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:25.891359    5778 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:25.891365    5778 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:25.891868    5778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:26.161809    5778 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:26.257549    5778 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:26.257557    5778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:26.257803    5778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:26.267347    5778 main.go:141] libmachine: STDOUT: 
	I0923 17:29:26.267369    5778 main.go:141] libmachine: STDERR: 
	I0923 17:29:26.267442    5778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2 +20000M
	I0923 17:29:26.275307    5778 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:26.275324    5778 main.go:141] libmachine: STDERR: 
	I0923 17:29:26.275336    5778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:26.275341    5778 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:26.275358    5778 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:26.275382    5778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:4d:d7:61:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:26.277017    5778 main.go:141] libmachine: STDOUT: 
	I0923 17:29:26.277030    5778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:26.277044    5778 client.go:171] duration metric: took 385.8475ms to LocalClient.Create
	I0923 17:29:28.279213    5778 start.go:128] duration metric: took 2.411016916s to createHost
	I0923 17:29:28.279277    5778 start.go:83] releasing machines lock for "old-k8s-version-908000", held for 2.411171916s
	W0923 17:29:28.279670    5778 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:28.291243    5778 out.go:201] 
	W0923 17:29:28.295135    5778 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:28.295152    5778 out.go:270] * 
	* 
	W0923 17:29:28.296780    5778 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:28.309234    5778 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (56.576458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)
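
Every attempt in this run dies at the same precondition: socket_vmnet_client cannot reach the vmnet helper's unix socket, so QEMU never receives a network file descriptor and host creation is aborted. As a minimal sketch (not part of the test suite), the reachability of that socket can be probed with nothing but the Go standard library, using the socket path shown in the logs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The qemu2 driver obtains QEMU's network fd from this unix socket;
        // "connection refused" here reproduces the failure mode in the logs.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

A failing dial here, before any VM state is created, distinguishes a dead socket_vmnet daemon on the build host from a genuine driver regression.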

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-908000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-908000 create -f testdata/busybox.yaml: exit status 1 (29.4935ms)

** stderr ** 
	error: context "old-k8s-version-908000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-908000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.88275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.808417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
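
Once FirstStart fails, every kubectl step below inherits the same "context does not exist" error: the profile's context was never written to the kubeconfig. A hypothetical pre-check (not the suite's code) using client-go's clientcmd loader, with the KUBECONFIG path taken from the start logs:

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19696-1109/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        // A context only appears here after a successful start, so this
        // lookup fails for the profile in this run.
        if _, ok := cfg.Contexts["old-k8s-version-908000"]; !ok {
            fmt.Println(`context "old-k8s-version-908000" does not exist`)
        }
    }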

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-908000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-908000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-908000 describe deploy/metrics-server -n kube-system: exit status 1 (27.045875ms)

** stderr ** 
	error: context "old-k8s-version-908000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-908000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (30.647792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
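
The assertion that fails at start_stop_delete_test.go:221 only requires the describe output to mention the registry-rewritten image. Reduced to its core, and with illustrative names rather than the suite's actual helpers, the check is a substring match that an empty describe output can never satisfy:

    package main

    import (
        "fmt"
        "strings"
    )

    // addonImageOK is a hypothetical stand-in for the failing expectation:
    // the deployment description must reference the overridden registry.
    func addonImageOK(deployInfo string) bool {
        return strings.Contains(deployInfo, " fake.domain/registry.k8s.io/echoserver:1.4")
    }

    func main() {
        fmt.Println(addonImageOK("")) // false, as in the failure above
    }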

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.190059667s)

-- stdout --
	* [old-k8s-version-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-908000" primary control-plane node in "old-k8s-version-908000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:32.399546    5830 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:32.399691    5830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:32.399697    5830 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:32.399700    5830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:32.399839    5830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:32.400884    5830 out.go:352] Setting JSON to false
	I0923 17:29:32.417091    5830 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3535,"bootTime":1727134237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:32.417163    5830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:32.422321    5830 out.go:177] * [old-k8s-version-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:32.429339    5830 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:32.429384    5830 notify.go:220] Checking for updates...
	I0923 17:29:32.436256    5830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:32.439275    5830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:32.442395    5830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:32.445288    5830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:32.448265    5830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:32.451559    5830 config.go:182] Loaded profile config "old-k8s-version-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 17:29:32.455234    5830 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 17:29:32.458300    5830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:32.463276    5830 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:29:32.470254    5830 start.go:297] selected driver: qemu2
	I0923 17:29:32.470258    5830 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:32.470300    5830 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:32.472462    5830 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:32.472486    5830 cni.go:84] Creating CNI manager for ""
	I0923 17:29:32.472505    5830 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 17:29:32.472535    5830 start.go:340] cluster config:
	{Name:old-k8s-version-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:32.475902    5830 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:32.481299    5830 out.go:177] * Starting "old-k8s-version-908000" primary control-plane node in "old-k8s-version-908000" cluster
	I0923 17:29:32.486315    5830 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 17:29:32.486331    5830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 17:29:32.486345    5830 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:32.486412    5830 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:32.486417    5830 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 17:29:32.486461    5830 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/old-k8s-version-908000/config.json ...
	I0923 17:29:32.487014    5830 start.go:360] acquireMachinesLock for old-k8s-version-908000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:32.487041    5830 start.go:364] duration metric: took 20.834µs to acquireMachinesLock for "old-k8s-version-908000"
	I0923 17:29:32.487050    5830 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:29:32.487055    5830 fix.go:54] fixHost starting: 
	I0923 17:29:32.487160    5830 fix.go:112] recreateIfNeeded on old-k8s-version-908000: state=Stopped err=<nil>
	W0923 17:29:32.487169    5830 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:29:32.491265    5830 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-908000" ...
	I0923 17:29:32.499247    5830 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:32.499280    5830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:4d:d7:61:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:32.501191    5830 main.go:141] libmachine: STDOUT: 
	I0923 17:29:32.501215    5830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:32.501243    5830 fix.go:56] duration metric: took 14.186458ms for fixHost
	I0923 17:29:32.501248    5830 start.go:83] releasing machines lock for "old-k8s-version-908000", held for 14.203375ms
	W0923 17:29:32.501255    5830 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:32.501299    5830 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:32.501304    5830 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:37.503513    5830 start.go:360] acquireMachinesLock for old-k8s-version-908000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:37.504067    5830 start.go:364] duration metric: took 439.583µs to acquireMachinesLock for "old-k8s-version-908000"
	I0923 17:29:37.504161    5830 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:29:37.504182    5830 fix.go:54] fixHost starting: 
	I0923 17:29:37.504932    5830 fix.go:112] recreateIfNeeded on old-k8s-version-908000: state=Stopped err=<nil>
	W0923 17:29:37.504958    5830 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:29:37.514736    5830 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-908000" ...
	I0923 17:29:37.518680    5830 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:37.518879    5830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e7:4d:d7:61:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/old-k8s-version-908000/disk.qcow2
	I0923 17:29:37.527010    5830 main.go:141] libmachine: STDOUT: 
	I0923 17:29:37.527059    5830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:37.527140    5830 fix.go:56] duration metric: took 22.958209ms for fixHost
	I0923 17:29:37.527159    5830 start.go:83] releasing machines lock for "old-k8s-version-908000", held for 23.068375ms
	W0923 17:29:37.527388    5830 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:37.534674    5830 out.go:201] 
	W0923 17:29:37.538741    5830 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:37.538761    5830 out.go:270] * 
	* 
	W0923 17:29:37.540558    5830 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:37.548717    5830 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-908000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (57.774541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
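
The second start shows the driver's fixed retry policy: fixHost fails, minikube logs "Will try again in 5 seconds", makes exactly one more attempt, then exits with GUEST_PROVISION. A stripped-down sketch of that control flow (the function name is a placeholder, not minikube's API), with the error text taken from the logs:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the real driver start; it always fails the way
    // this run does, so both attempts are exercised.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }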

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-908000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (31.03875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-908000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.375417ms)

** stderr ** 
	error: context "old-k8s-version-908000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-908000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.939166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-908000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.444ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
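
The "(-want +got)" block above is a go-cmp style diff of the expected v1.20.0 image set against an empty result: with no running VM, the "image list" command returns nothing, so every expected image shows up with a "-" prefix. A sketch that reproduces that shape, assuming github.com/google/go-cmp and an abbreviated want list:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        var got []string // empty: no VM, so no images were listed
        // Each want entry prints with a "-" prefix, as in the failure above.
        fmt.Println(cmp.Diff(want, got))
    }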

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-908000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-908000 --alsologtostderr -v=1: exit status 83 (42.186666ms)

-- stdout --
	* The control-plane node old-k8s-version-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-908000"

-- /stdout --
** stderr ** 
	I0923 17:29:37.813539    5849 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:37.814430    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:37.814433    5849 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:37.814436    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:37.814585    5849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:37.814804    5849 out.go:352] Setting JSON to false
	I0923 17:29:37.814813    5849 mustload.go:65] Loading cluster: old-k8s-version-908000
	I0923 17:29:37.815027    5849 config.go:182] Loaded profile config "old-k8s-version-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 17:29:37.818878    5849 out.go:177] * The control-plane node old-k8s-version-908000 host is not running: state=Stopped
	I0923 17:29:37.821778    5849 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-908000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-908000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.647833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (29.729834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
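
Note the exit code: pause returns 83 rather than the 80 seen for the provisioning failures above, and the accompanying message pins it to a stopped control-plane host, so a harness can branch on the condition without parsing output. A minimal standard-library sketch of reading that code (the binary path is the one used throughout this report):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "old-k8s-version-908000")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // 83 in this run means the host is not running, not that pause crashed.
            fmt.Println("exit status", ee.ExitCode())
        }
    }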

TestStartStop/group/no-preload/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.946726959s)

-- stdout --
	* [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-117000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:38.136014    5866 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:38.136135    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:38.136138    5866 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:38.136140    5866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:38.136279    5866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:38.137397    5866 out.go:352] Setting JSON to false
	I0923 17:29:38.153540    5866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3541,"bootTime":1727134237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:38.153607    5866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:38.157161    5866 out.go:177] * [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:38.165187    5866 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:38.165262    5866 notify.go:220] Checking for updates...
	I0923 17:29:38.173051    5866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:38.176124    5866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:38.179094    5866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:38.182065    5866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:38.185189    5866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:38.187075    5866 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:38.187132    5866 config.go:182] Loaded profile config "stopped-upgrade-180000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 17:29:38.187178    5866 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:38.192083    5866 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:29:38.198970    5866 start.go:297] selected driver: qemu2
	I0923 17:29:38.198975    5866 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:29:38.198981    5866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:38.201263    5866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:29:38.204102    5866 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:29:38.207160    5866 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:38.207176    5866 cni.go:84] Creating CNI manager for ""
	I0923 17:29:38.207196    5866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:29:38.207202    5866 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:29:38.207226    5866 start.go:340] cluster config:
	{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:38.210850    5866 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.218121    5866 out.go:177] * Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	I0923 17:29:38.222101    5866 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:38.222163    5866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/no-preload-117000/config.json ...
	I0923 17:29:38.222176    5866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/no-preload-117000/config.json: {Name:mk629c9c4e7557ac307ec735deaa2d543338e07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:29:38.222189    5866 cache.go:107] acquiring lock: {Name:mkd7e231fe1764ba47397126a95818f5a26960b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222263    5866 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 17:29:38.222278    5866 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 91.75µs
	I0923 17:29:38.222294    5866 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 17:29:38.222288    5866 cache.go:107] acquiring lock: {Name:mkc97b55bd85a28c344f2802b2db53b78ac77f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222303    5866 cache.go:107] acquiring lock: {Name:mk00604bc5e06de31ad1c2a9cc9965da78321b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222305    5866 cache.go:107] acquiring lock: {Name:mk87c14f50ef1e5aadcfbc6fcbf3b895e4b77b3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222243    5866 cache.go:107] acquiring lock: {Name:mkda3968e76a2a2b3893f4c65917d30a83448f8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222447    5866 cache.go:107] acquiring lock: {Name:mk52aae8724f905f6cf001005aa7eaf0157aa0dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222457    5866 cache.go:107] acquiring lock: {Name:mk23e349f96a4274514152b716b44a3caedb4773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222473    5866 cache.go:107] acquiring lock: {Name:mk64fd5dbcb4ead9532840c1a05b0cac5f3c25e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:38.222420    5866 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 17:29:38.222570    5866 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0923 17:29:38.222613    5866 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:38.222648    5866 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 17:29:38.222651    5866 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "no-preload-117000"
	I0923 17:29:38.222657    5866 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 17:29:38.222663    5866 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 17:29:38.222684    5866 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 17:29:38.222665    5866 start.go:93] Provisioning new machine with config: &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:38.222692    5866 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:38.222703    5866 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0923 17:29:38.227154    5866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:38.233630    5866 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0923 17:29:38.233667    5866 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0923 17:29:38.233675    5866 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 17:29:38.233680    5866 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 17:29:38.233642    5866 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 17:29:38.233633    5866 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 17:29:38.235200    5866 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 17:29:38.243699    5866 start.go:159] libmachine.API.Create for "no-preload-117000" (driver="qemu2")
	I0923 17:29:38.243730    5866 client.go:168] LocalClient.Create starting
	I0923 17:29:38.243789    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:38.243817    5866 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:38.243826    5866 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:38.243874    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:38.243896    5866 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:38.243906    5866 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:38.244282    5866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:38.409816    5866 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:38.550093    5866 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:38.550117    5866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:38.550334    5866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:38.559792    5866 main.go:141] libmachine: STDOUT: 
	I0923 17:29:38.559812    5866 main.go:141] libmachine: STDERR: 
	I0923 17:29:38.559854    5866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2 +20000M
	I0923 17:29:38.568109    5866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:38.568125    5866 main.go:141] libmachine: STDERR: 
	I0923 17:29:38.568138    5866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:38.568144    5866 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:38.568157    5866 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:38.568183    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:35:6e:f6:30:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:38.569945    5866 main.go:141] libmachine: STDOUT: 
	I0923 17:29:38.569961    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:38.569980    5866 client.go:171] duration metric: took 326.246083ms to LocalClient.Create
	I0923 17:29:38.662844    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0923 17:29:38.663768    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0923 17:29:38.665980    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0923 17:29:38.668960    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0923 17:29:38.709747    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0923 17:29:38.737192    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0923 17:29:38.755771    5866 cache.go:162] opening:  /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0923 17:29:38.794995    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 17:29:38.795006    5866 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 572.554208ms
	I0923 17:29:38.795014    5866 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 17:29:40.570157    5866 start.go:128] duration metric: took 2.347467583s to createHost
	I0923 17:29:40.570186    5866 start.go:83] releasing machines lock for "no-preload-117000", held for 2.347543958s
	W0923 17:29:40.570219    5866 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:40.580182    5866 out.go:177] * Deleting "no-preload-117000" in qemu2 ...
	W0923 17:29:40.605703    5866 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:40.605721    5866 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:40.961348    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 17:29:40.961395    5866 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.739184167s
	I0923 17:29:40.961414    5866 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 17:29:41.893445    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 17:29:41.893459    5866 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.671079042s
	I0923 17:29:41.893467    5866 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 17:29:42.184396    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 17:29:42.184416    5866 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.962139875s
	I0923 17:29:42.184425    5866 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 17:29:42.608367    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 17:29:42.608387    5866 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.386129708s
	I0923 17:29:42.608410    5866 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 17:29:43.169776    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 17:29:43.169822    5866 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.947423875s
	I0923 17:29:43.169846    5866 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 17:29:45.606151    5866 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:45.606569    5866 start.go:364] duration metric: took 346.459µs to acquireMachinesLock for "no-preload-117000"
	I0923 17:29:45.606704    5866 start.go:93] Provisioning new machine with config: &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:45.606921    5866 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:45.612937    5866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:45.663892    5866 start.go:159] libmachine.API.Create for "no-preload-117000" (driver="qemu2")
	I0923 17:29:45.663940    5866 client.go:168] LocalClient.Create starting
	I0923 17:29:45.664067    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:45.664134    5866 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:45.664166    5866 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:45.664238    5866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:45.664286    5866 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:45.664303    5866 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:45.664803    5866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:45.835410    5866 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:45.977319    5866 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:45.977326    5866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:45.977593    5866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:45.987059    5866 main.go:141] libmachine: STDOUT: 
	I0923 17:29:45.987075    5866 main.go:141] libmachine: STDERR: 
	I0923 17:29:45.987137    5866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2 +20000M
	I0923 17:29:45.994973    5866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:45.994990    5866 main.go:141] libmachine: STDERR: 
	I0923 17:29:45.995002    5866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:45.995013    5866 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:45.995020    5866 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:45.995057    5866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:bb:a8:f5:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:45.996760    5866 main.go:141] libmachine: STDOUT: 
	I0923 17:29:45.996775    5866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:45.996790    5866 client.go:171] duration metric: took 332.846292ms to LocalClient.Create
	I0923 17:29:47.355385    5866 cache.go:157] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 17:29:47.355435    5866 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.133197792s
	I0923 17:29:47.355460    5866 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 17:29:47.355528    5866 cache.go:87] Successfully saved all images to host disk.
	I0923 17:29:47.999001    5866 start.go:128] duration metric: took 2.39204575s to createHost
	I0923 17:29:47.999079    5866 start.go:83] releasing machines lock for "no-preload-117000", held for 2.392500625s
	W0923 17:29:47.999429    5866 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:48.007005    5866 out.go:201] 
	W0923 17:29:48.015172    5866 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:48.015200    5866 out.go:270] * 
	* 
	W0923 17:29:48.017978    5866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:48.026992    5866 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (65.495125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
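
Every step of this failure reduces to the single root cause visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 ever boots, and libmachine gives up after one retry. A minimal diagnostic probe in Go (hypothetical; not part of minikube or this test suite) that reproduces the same check against the SocketVMnetPath from the config dump:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dial the unix socket that socket_vmnet_client connects to on QEMU's
// behalf. "connection refused" here matches the failure above and means
// the socket_vmnet daemon is not running (or not listening at this path).
func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath in the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On this agent the probe would fail, which is consistent with the 10.02s test duration: two createHost attempts of roughly 2.3s each plus the 5s back-off, never reaching Kubernetes bring-up. Note that image caching (the cache.go lines above) runs concurrently and completes successfully regardless of the VM failure.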

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.953315834s)

-- stdout --
	* [embed-certs-360000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-360000" primary control-plane node in "embed-certs-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:42.023825    5907 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:42.023978    5907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:42.023982    5907 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:42.023984    5907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:42.024127    5907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:42.025268    5907 out.go:352] Setting JSON to false
	I0923 17:29:42.041376    5907 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3545,"bootTime":1727134237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:42.041448    5907 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:42.044512    5907 out.go:177] * [embed-certs-360000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:42.052571    5907 notify.go:220] Checking for updates...
	I0923 17:29:42.056416    5907 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:42.063469    5907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:42.070408    5907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:42.078832    5907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:42.086447    5907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:42.093424    5907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:42.097634    5907 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:42.097701    5907 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:42.097755    5907 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:42.100450    5907 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:29:42.107333    5907 start.go:297] selected driver: qemu2
	I0923 17:29:42.107338    5907 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:29:42.107343    5907 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:42.109613    5907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:29:42.113443    5907 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:29:42.117554    5907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:42.117579    5907 cni.go:84] Creating CNI manager for ""
	I0923 17:29:42.117605    5907 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:29:42.117613    5907 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:29:42.117641    5907 start.go:340] cluster config:
	{Name:embed-certs-360000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:42.121408    5907 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:42.129478    5907 out.go:177] * Starting "embed-certs-360000" primary control-plane node in "embed-certs-360000" cluster
	I0923 17:29:42.131089    5907 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:42.131109    5907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:29:42.131118    5907 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:42.131182    5907 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:42.131188    5907 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:29:42.131260    5907 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/embed-certs-360000/config.json ...
	I0923 17:29:42.131275    5907 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/embed-certs-360000/config.json: {Name:mk7802df41a9fd61a6db13bea996345886a19785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:29:42.131697    5907 start.go:360] acquireMachinesLock for embed-certs-360000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:42.131735    5907 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "embed-certs-360000"
	I0923 17:29:42.131750    5907 start.go:93] Provisioning new machine with config: &{Name:embed-certs-360000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:42.131784    5907 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:42.135457    5907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:42.151485    5907 start.go:159] libmachine.API.Create for "embed-certs-360000" (driver="qemu2")
	I0923 17:29:42.151513    5907 client.go:168] LocalClient.Create starting
	I0923 17:29:42.151571    5907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:42.151599    5907 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:42.151609    5907 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:42.151652    5907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:42.151675    5907 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:42.151684    5907 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:42.151988    5907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:42.324354    5907 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:42.376856    5907 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:42.376861    5907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:42.377097    5907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:42.386331    5907 main.go:141] libmachine: STDOUT: 
	I0923 17:29:42.386350    5907 main.go:141] libmachine: STDERR: 
	I0923 17:29:42.386401    5907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2 +20000M
	I0923 17:29:42.394410    5907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:42.394426    5907 main.go:141] libmachine: STDERR: 
	I0923 17:29:42.394453    5907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:42.394459    5907 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:42.394471    5907 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:42.394499    5907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:36:99:7d:a0:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:42.396104    5907 main.go:141] libmachine: STDOUT: 
	I0923 17:29:42.396117    5907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:42.396138    5907 client.go:171] duration metric: took 244.61975ms to LocalClient.Create
	I0923 17:29:44.398300    5907 start.go:128] duration metric: took 2.266514167s to createHost
	I0923 17:29:44.398373    5907 start.go:83] releasing machines lock for "embed-certs-360000", held for 2.266641875s
	W0923 17:29:44.398432    5907 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:44.405775    5907 out.go:177] * Deleting "embed-certs-360000" in qemu2 ...
	W0923 17:29:44.438024    5907 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:44.438046    5907 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:49.440317    5907 start.go:360] acquireMachinesLock for embed-certs-360000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:49.440875    5907 start.go:364] duration metric: took 412.416µs to acquireMachinesLock for "embed-certs-360000"
	I0923 17:29:49.441023    5907 start.go:93] Provisioning new machine with config: &{Name:embed-certs-360000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:49.441337    5907 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:49.447087    5907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:49.497770    5907 start.go:159] libmachine.API.Create for "embed-certs-360000" (driver="qemu2")
	I0923 17:29:49.497828    5907 client.go:168] LocalClient.Create starting
	I0923 17:29:49.497918    5907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:49.497968    5907 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:49.497985    5907 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:49.498064    5907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:49.498101    5907 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:49.498113    5907 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:49.498776    5907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:49.735421    5907 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:49.875229    5907 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:49.875238    5907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:49.875491    5907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:49.884974    5907 main.go:141] libmachine: STDOUT: 
	I0923 17:29:49.884991    5907 main.go:141] libmachine: STDERR: 
	I0923 17:29:49.885046    5907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2 +20000M
	I0923 17:29:49.892805    5907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:49.892820    5907 main.go:141] libmachine: STDERR: 
	I0923 17:29:49.892833    5907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:49.892838    5907 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:49.892848    5907 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:49.892874    5907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:95:0c:59:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:49.894473    5907 main.go:141] libmachine: STDOUT: 
	I0923 17:29:49.894492    5907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:49.894504    5907 client.go:171] duration metric: took 396.674458ms to LocalClient.Create
	I0923 17:29:51.896689    5907 start.go:128] duration metric: took 2.455338208s to createHost
	I0923 17:29:51.896772    5907 start.go:83] releasing machines lock for "embed-certs-360000", held for 2.455886291s
	W0923 17:29:51.897168    5907 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:51.905795    5907 out.go:201] 
	W0923 17:29:51.916847    5907 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:51.916896    5907 out.go:270] * 
	* 
	W0923 17:29:51.919544    5907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:51.931860    5907 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (64.719125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
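
The retry shape is identical for every profile in this run: createHost fails, the half-created VM is deleted, minikube waits 5 seconds, retries once, and then exits with GUEST_PROVISION (exit status 80). A compressed sketch of that control flow (illustrative names only, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// Stand-in for libmachine.API.Create; on this agent it always fails the
// same way, so the single retry below can never succeed.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := createHost()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	// the log shows the partially created VM is deleted before the retry
	time.Sleep(5 * time.Second)
	if err := createHost(); err != nil {
		return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}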

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-117000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-117000 create -f testdata/busybox.yaml: exit status 1 (30.483542ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-117000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.729667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.411334ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
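
This failure is pure fallout from FirstStart: the cluster was never created, so the kubeconfig has no "no-preload-117000" context and kubectl exits before the busybox manifest is even read. A hedged pre-check using client-go (not something this suite actually does; shown only to make the error concrete):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const ctx = "no-preload-117000"
	// Load kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[ctx]; !ok {
		// matches: error: context "no-preload-117000" does not exist
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", ctx)
		os.Exit(1)
	}
	fmt.Println("context present:", ctx)
}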

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-117000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system: exit status 1 (26.733625ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-117000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.454041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
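
The assertion at start_stop_delete_test.go:221 expects the `kubectl describe deploy/metrics-server` output to contain fake.domain/registry.k8s.io/echoserver:1.4, i.e. the --registries override joined onto the --images override; with no cluster, the describe output is empty and the check trivially fails. A small illustration of how that expected reference is composed (hypothetical code, not the test's own):

package main

import (
	"fmt"
	"strings"
)

func main() {
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	expected := registry + "/" + image        // "fake.domain/registry.k8s.io/echoserver:1.4"

	describeOutput := "" // empty here: `kubectl describe` failed, so no deployment info
	if !strings.Contains(describeOutput, expected) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", expected)
	}
}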

TestStartStop/group/no-preload/serial/SecondStart (6.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.632456458s)

-- stdout --
	* [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	* Restarting existing qemu2 VM for "no-preload-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-117000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0923 17:29:50.388595    5954 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:50.388740    5954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:50.388743    5954 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:50.388745    5954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:50.388876    5954 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:50.390055    5954 out.go:352] Setting JSON to false
	I0923 17:29:50.405992    5954 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3553,"bootTime":1727134237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:50.406055    5954 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:50.411077    5954 out.go:177] * [no-preload-117000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:50.418080    5954 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:50.418131    5954 notify.go:220] Checking for updates...
	I0923 17:29:50.425020    5954 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:50.427984    5954 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:50.431033    5954 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:50.434038    5954 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:50.437018    5954 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:50.440310    5954 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:50.440586    5954 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:50.443982    5954 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:29:50.451037    5954 start.go:297] selected driver: qemu2
	I0923 17:29:50.451044    5954 start.go:901] validating driver "qemu2" against &{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:50.451110    5954 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:50.453293    5954 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:50.453318    5954 cni.go:84] Creating CNI manager for ""
	I0923 17:29:50.453341    5954 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:29:50.453365    5954 start.go:340] cluster config:
	{Name:no-preload-117000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-117000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:50.457014    5954 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.464032    5954 out.go:177] * Starting "no-preload-117000" primary control-plane node in "no-preload-117000" cluster
	I0923 17:29:50.468046    5954 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:50.468128    5954 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/no-preload-117000/config.json ...
	I0923 17:29:50.468150    5954 cache.go:107] acquiring lock: {Name:mkd7e231fe1764ba47397126a95818f5a26960b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468167    5954 cache.go:107] acquiring lock: {Name:mkc97b55bd85a28c344f2802b2db53b78ac77f9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468190    5954 cache.go:107] acquiring lock: {Name:mk52aae8724f905f6cf001005aa7eaf0157aa0dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468223    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 17:29:50.468228    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 17:29:50.468230    5954 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.25µs
	I0923 17:29:50.468233    5954 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 79.25µs
	I0923 17:29:50.468236    5954 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 17:29:50.468237    5954 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 17:29:50.468241    5954 cache.go:107] acquiring lock: {Name:mk00604bc5e06de31ad1c2a9cc9965da78321b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468250    5954 cache.go:107] acquiring lock: {Name:mkda3968e76a2a2b3893f4c65917d30a83448f8e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468260    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 17:29:50.468268    5954 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 103.792µs
	I0923 17:29:50.468273    5954 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 17:29:50.468279    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 17:29:50.468285    5954 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 44.875µs
	I0923 17:29:50.468289    5954 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 17:29:50.468295    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 17:29:50.468302    5954 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 52.916µs
	I0923 17:29:50.468305    5954 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 17:29:50.468302    5954 cache.go:107] acquiring lock: {Name:mk64fd5dbcb4ead9532840c1a05b0cac5f3c25e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468300    5954 cache.go:107] acquiring lock: {Name:mk23e349f96a4274514152b716b44a3caedb4773 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468332    5954 cache.go:107] acquiring lock: {Name:mk87c14f50ef1e5aadcfbc6fcbf3b895e4b77b3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:50.468363    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 17:29:50.468368    5954 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 68.292µs
	I0923 17:29:50.468377    5954 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 17:29:50.468388    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 17:29:50.468392    5954 cache.go:115] /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 17:29:50.468392    5954 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 93.5µs
	I0923 17:29:50.468396    5954 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 17:29:50.468398    5954 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 120.625µs
	I0923 17:29:50.468401    5954 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 17:29:50.468405    5954 cache.go:87] Successfully saved all images to host disk.
	I0923 17:29:50.468548    5954 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:51.896918    5954 start.go:364] duration metric: took 1.428359541s to acquireMachinesLock for "no-preload-117000"
	I0923 17:29:51.897108    5954 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:29:51.897129    5954 fix.go:54] fixHost starting: 
	I0923 17:29:51.897837    5954 fix.go:112] recreateIfNeeded on no-preload-117000: state=Stopped err=<nil>
	W0923 17:29:51.897899    5954 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:29:51.913858    5954 out.go:177] * Restarting existing qemu2 VM for "no-preload-117000" ...
	I0923 17:29:51.919835    5954 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:51.920011    5954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:bb:a8:f5:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:51.929730    5954 main.go:141] libmachine: STDOUT: 
	I0923 17:29:51.929798    5954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:51.929919    5954 fix.go:56] duration metric: took 32.777667ms for fixHost
	I0923 17:29:51.929939    5954 start.go:83] releasing machines lock for "no-preload-117000", held for 32.990042ms
	W0923 17:29:51.929971    5954 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:51.930128    5954 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:51.930146    5954 start.go:729] Will try again in 5 seconds ...
	I0923 17:29:56.932413    5954 start.go:360] acquireMachinesLock for no-preload-117000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:56.932826    5954 start.go:364] duration metric: took 311.125µs to acquireMachinesLock for "no-preload-117000"
	I0923 17:29:56.932960    5954 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:29:56.932980    5954 fix.go:54] fixHost starting: 
	I0923 17:29:56.933647    5954 fix.go:112] recreateIfNeeded on no-preload-117000: state=Stopped err=<nil>
	W0923 17:29:56.933674    5954 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:29:56.939125    5954 out.go:177] * Restarting existing qemu2 VM for "no-preload-117000" ...
	I0923 17:29:56.947126    5954 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:56.947490    5954 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:bb:a8:f5:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/no-preload-117000/disk.qcow2
	I0923 17:29:56.956456    5954 main.go:141] libmachine: STDOUT: 
	I0923 17:29:56.956538    5954 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:56.956642    5954 fix.go:56] duration metric: took 23.657417ms for fixHost
	I0923 17:29:56.956668    5954 start.go:83] releasing machines lock for "no-preload-117000", held for 23.821041ms
	W0923 17:29:56.956984    5954 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-117000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:56.965107    5954 out.go:201] 
	W0923 17:29:56.969057    5954 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:56.969091    5954 out.go:270] * 
	* 
	W0923 17:29:56.971879    5954 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:29:56.979027    5954 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-117000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (66.416625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.70s)
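Every retry in this block dies at the same host-side step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so qemu-system-aarch64 never receives its network file descriptor and the restart is abandoned. A minimal triage sketch for the build agent, assuming the /opt/socket_vmnet layout shown in the command lines above (the gateway address is the socket_vmnet README default and may differ on this host):

	# Check whether the daemon is running and the socket exists.
	ps aux | grep -v grep | grep socket_vmnet
	ls -l /var/run/socket_vmnet

	# If the socket is missing, run the daemon in the foreground to surface the error
	# (invocation per the socket_vmnet README; adjust paths and flags to this agent's install).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that socket accepts connections, every qemu2 test below that touches a VM fails with the identical "Connection refused" before Kubernetes is ever involved.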

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-360000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-360000 create -f testdata/busybox.yaml: exit status 1 (29.025625ms)

** stderr ** 
	error: context "embed-certs-360000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-360000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (29.5145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (29.04ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
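The kubectl errors in this group are downstream of the failed start: the embed-certs-360000 VM never came up, so no kubeconfig context was (re)written for it, and every "kubectl --context embed-certs-360000 ..." call exits 1 before reaching a cluster. Two standard commands to confirm what actually exists on the agent (both are stock kubectl/minikube subcommands; the binary path is the one the harness uses):

	kubectl config get-contexts
	out/minikube-darwin-arm64 profile list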

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-360000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-360000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-360000 describe deploy/metrics-server -n kube-system: exit status 1 (26.605291ms)

** stderr ** 
	error: context "embed-certs-360000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-360000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (29.014041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.199370083s)

-- stdout --
	* [embed-certs-360000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-360000" primary control-plane node in "embed-certs-360000" cluster
	* Restarting existing qemu2 VM for "embed-certs-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:55.634351    5998 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:55.634492    5998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:55.634496    5998 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:55.634498    5998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:55.634625    5998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:55.635637    5998 out.go:352] Setting JSON to false
	I0923 17:29:55.651424    5998 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3558,"bootTime":1727134237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:55.651489    5998 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:55.656109    5998 out.go:177] * [embed-certs-360000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:55.663084    5998 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:55.663160    5998 notify.go:220] Checking for updates...
	I0923 17:29:55.670938    5998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:55.674060    5998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:55.677082    5998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:55.680105    5998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:55.683036    5998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:55.686323    5998 config.go:182] Loaded profile config "embed-certs-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:55.686572    5998 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:55.691044    5998 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:29:55.698025    5998 start.go:297] selected driver: qemu2
	I0923 17:29:55.698030    5998 start.go:901] validating driver "qemu2" against &{Name:embed-certs-360000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:55.698072    5998 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:55.700101    5998 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:55.700123    5998 cni.go:84] Creating CNI manager for ""
	I0923 17:29:55.700142    5998 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:29:55.700164    5998 start.go:340] cluster config:
	{Name:embed-certs-360000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-360000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:55.703460    5998 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:55.711070    5998 out.go:177] * Starting "embed-certs-360000" primary control-plane node in "embed-certs-360000" cluster
	I0923 17:29:55.715116    5998 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:55.715131    5998 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:29:55.715141    5998 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:55.715212    5998 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:55.715217    5998 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:29:55.715303    5998 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/embed-certs-360000/config.json ...
	I0923 17:29:55.715842    5998 start.go:360] acquireMachinesLock for embed-certs-360000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:55.715869    5998 start.go:364] duration metric: took 21.458µs to acquireMachinesLock for "embed-certs-360000"
	I0923 17:29:55.715879    5998 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:29:55.715884    5998 fix.go:54] fixHost starting: 
	I0923 17:29:55.716003    5998 fix.go:112] recreateIfNeeded on embed-certs-360000: state=Stopped err=<nil>
	W0923 17:29:55.716011    5998 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:29:55.724079    5998 out.go:177] * Restarting existing qemu2 VM for "embed-certs-360000" ...
	I0923 17:29:55.727034    5998 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:55.727068    5998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:95:0c:59:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:29:55.728932    5998 main.go:141] libmachine: STDOUT: 
	I0923 17:29:55.728950    5998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:55.728986    5998 fix.go:56] duration metric: took 13.102084ms for fixHost
	I0923 17:29:55.728991    5998 start.go:83] releasing machines lock for "embed-certs-360000", held for 13.117292ms
	W0923 17:29:55.728998    5998 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:29:55.729030    5998 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:29:55.729034    5998 start.go:729] Will try again in 5 seconds ...
	I0923 17:30:00.731216    5998 start.go:360] acquireMachinesLock for embed-certs-360000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:00.731690    5998 start.go:364] duration metric: took 354.917µs to acquireMachinesLock for "embed-certs-360000"
	I0923 17:30:00.731815    5998 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:30:00.731837    5998 fix.go:54] fixHost starting: 
	I0923 17:30:00.732604    5998 fix.go:112] recreateIfNeeded on embed-certs-360000: state=Stopped err=<nil>
	W0923 17:30:00.732633    5998 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:30:00.755161    5998 out.go:177] * Restarting existing qemu2 VM for "embed-certs-360000" ...
	I0923 17:30:00.758772    5998 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:00.758978    5998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:95:0c:59:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/embed-certs-360000/disk.qcow2
	I0923 17:30:00.768525    5998 main.go:141] libmachine: STDOUT: 
	I0923 17:30:00.768582    5998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:00.768668    5998 fix.go:56] duration metric: took 36.83625ms for fixHost
	I0923 17:30:00.768686    5998 start.go:83] releasing machines lock for "embed-certs-360000", held for 36.971584ms
	W0923 17:30:00.768873    5998 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-360000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-360000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:00.778966    5998 out.go:201] 
	W0923 17:30:00.780596    5998 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:00.780621    5998 out.go:270] * 
	* 
	W0923 17:30:00.783138    5998 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:30:00.792898    5998 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-360000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (65.960458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-117000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (31.954291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-117000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.918042ms)

** stderr ** 
	error: context "no-preload-117000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-117000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.68725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-117000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.62525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
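The empty image list is consistent with the VM never booting: with no running node to query, "image list" returns nothing, so every expected v1.31.1 image lands on the -want side of the diff above. The check can be rerun by hand with the exact command the test invokes, piped through a JSON pretty-printer to inspect what the profile reports (python3 on the agent's PATH is an assumption):

	out/minikube-darwin-arm64 -p no-preload-117000 image list --format=json | python3 -m json.tool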

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1: exit status 83 (40.823666ms)

-- stdout --
	* The control-plane node no-preload-117000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-117000"

-- /stdout --
** stderr ** 
	I0923 17:29:57.249272    6018 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:57.249417    6018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:57.249421    6018 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:57.249423    6018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:57.249553    6018 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:57.249779    6018 out.go:352] Setting JSON to false
	I0923 17:29:57.249789    6018 mustload.go:65] Loading cluster: no-preload-117000
	I0923 17:29:57.250015    6018 config.go:182] Loaded profile config "no-preload-117000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:57.254475    6018 out.go:177] * The control-plane node no-preload-117000 host is not running: state=Stopped
	I0923 17:29:57.257521    6018 out.go:177]   To start a cluster, run: "minikube start -p no-preload-117000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-117000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.06ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (29.170625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-117000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.870998875s)

-- stdout --
	* [default-k8s-diff-port-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-534000" primary control-plane node in "default-k8s-diff-port-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 17:29:57.688883    6042 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:29:57.689019    6042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:57.689022    6042 out.go:358] Setting ErrFile to fd 2...
	I0923 17:29:57.689025    6042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:29:57.689189    6042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:29:57.690313    6042 out.go:352] Setting JSON to false
	I0923 17:29:57.706097    6042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3560,"bootTime":1727134237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:29:57.706177    6042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:29:57.711535    6042 out.go:177] * [default-k8s-diff-port-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:29:57.718466    6042 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:29:57.718506    6042 notify.go:220] Checking for updates...
	I0923 17:29:57.725460    6042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:29:57.728463    6042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:29:57.731486    6042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:29:57.734387    6042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:29:57.737467    6042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:29:57.740872    6042 config.go:182] Loaded profile config "embed-certs-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:57.740934    6042 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:29:57.740976    6042 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:29:57.745433    6042 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:29:57.752522    6042 start.go:297] selected driver: qemu2
	I0923 17:29:57.752528    6042 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:29:57.752535    6042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:29:57.754828    6042 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 17:29:57.757432    6042 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:29:57.760532    6042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:29:57.760556    6042 cni.go:84] Creating CNI manager for ""
	I0923 17:29:57.760588    6042 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:29:57.760599    6042 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:29:57.760631    6042 start.go:340] cluster config:
	{Name:default-k8s-diff-port-534000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:29:57.764338    6042 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:29:57.771436    6042 out.go:177] * Starting "default-k8s-diff-port-534000" primary control-plane node in "default-k8s-diff-port-534000" cluster
	I0923 17:29:57.775433    6042 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:29:57.775448    6042 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:29:57.775455    6042 cache.go:56] Caching tarball of preloaded images
	I0923 17:29:57.775511    6042 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:29:57.775519    6042 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:29:57.775575    6042 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/default-k8s-diff-port-534000/config.json ...
	I0923 17:29:57.775586    6042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/default-k8s-diff-port-534000/config.json: {Name:mke4b9d46426948ee8dd63ffbef0d47ee7def24a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:29:57.775803    6042 start.go:360] acquireMachinesLock for default-k8s-diff-port-534000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:29:57.775839    6042 start.go:364] duration metric: took 27.791µs to acquireMachinesLock for "default-k8s-diff-port-534000"
	I0923 17:29:57.775853    6042 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:29:57.775885    6042 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:29:57.783492    6042 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:29:57.801225    6042 start.go:159] libmachine.API.Create for "default-k8s-diff-port-534000" (driver="qemu2")
	I0923 17:29:57.801261    6042 client.go:168] LocalClient.Create starting
	I0923 17:29:57.801322    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:29:57.801359    6042 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:57.801370    6042 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:57.801405    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:29:57.801428    6042 main.go:141] libmachine: Decoding PEM data...
	I0923 17:29:57.801435    6042 main.go:141] libmachine: Parsing certificate...
	I0923 17:29:57.801916    6042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:29:57.966698    6042 main.go:141] libmachine: Creating SSH key...
	I0923 17:29:58.091314    6042 main.go:141] libmachine: Creating Disk image...
	I0923 17:29:58.091321    6042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:29:58.091550    6042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:29:58.101037    6042 main.go:141] libmachine: STDOUT: 
	I0923 17:29:58.101056    6042 main.go:141] libmachine: STDERR: 
	I0923 17:29:58.101122    6042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2 +20000M
	I0923 17:29:58.109115    6042 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:29:58.109131    6042 main.go:141] libmachine: STDERR: 
	I0923 17:29:58.109150    6042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:29:58.109156    6042 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:29:58.109169    6042 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:29:58.109194    6042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:b1:53:b0:d5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:29:58.110769    6042 main.go:141] libmachine: STDOUT: 
	I0923 17:29:58.110785    6042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:29:58.110806    6042 client.go:171] duration metric: took 309.54175ms to LocalClient.Create
	I0923 17:30:00.112975    6042 start.go:128] duration metric: took 2.337086333s to createHost
	I0923 17:30:00.113025    6042 start.go:83] releasing machines lock for "default-k8s-diff-port-534000", held for 2.337193708s
	W0923 17:30:00.113091    6042 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:00.128403    6042 out.go:177] * Deleting "default-k8s-diff-port-534000" in qemu2 ...
	W0923 17:30:00.161612    6042 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:00.161635    6042 start.go:729] Will try again in 5 seconds ...
	I0923 17:30:05.163760    6042 start.go:360] acquireMachinesLock for default-k8s-diff-port-534000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:05.164078    6042 start.go:364] duration metric: took 246.333µs to acquireMachinesLock for "default-k8s-diff-port-534000"
	I0923 17:30:05.164173    6042 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:30:05.164369    6042 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:30:05.172834    6042 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:30:05.215848    6042 start.go:159] libmachine.API.Create for "default-k8s-diff-port-534000" (driver="qemu2")
	I0923 17:30:05.215903    6042 client.go:168] LocalClient.Create starting
	I0923 17:30:05.216018    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:30:05.216083    6042 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:05.216122    6042 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:05.216179    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:30:05.216223    6042 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:05.216238    6042 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:05.216680    6042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:30:05.388805    6042 main.go:141] libmachine: Creating SSH key...
	I0923 17:30:05.462615    6042 main.go:141] libmachine: Creating Disk image...
	I0923 17:30:05.462621    6042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:30:05.462825    6042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:30:05.472003    6042 main.go:141] libmachine: STDOUT: 
	I0923 17:30:05.472025    6042 main.go:141] libmachine: STDERR: 
	I0923 17:30:05.472088    6042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2 +20000M
	I0923 17:30:05.479984    6042 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:30:05.479998    6042 main.go:141] libmachine: STDERR: 
	I0923 17:30:05.480014    6042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:30:05.480022    6042 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:30:05.480031    6042 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:05.480060    6042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:03:27:28:a7:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:30:05.481663    6042 main.go:141] libmachine: STDOUT: 
	I0923 17:30:05.481677    6042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:05.481691    6042 client.go:171] duration metric: took 265.785125ms to LocalClient.Create
	I0923 17:30:07.483846    6042 start.go:128] duration metric: took 2.319466458s to createHost
	I0923 17:30:07.483899    6042 start.go:83] releasing machines lock for "default-k8s-diff-port-534000", held for 2.319819833s
	W0923 17:30:07.484433    6042 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:07.494024    6042 out.go:201] 
	W0923 17:30:07.504193    6042 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:07.504221    6042 out.go:270] * 
	* 
	W0923 17:30:07.506853    6042 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:30:07.517022    6042 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (65.474625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.94s)
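Every start failure in this group reduces to the same line: ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused. On a unix-domain socket, "connection refused" means nothing is listening at the path handed to socket_vmnet_client, so QEMU never receives a network file descriptor and minikube retries once before exiting with GUEST_PROVISION. A hypothetical pre-flight check is sketched below; the socket path comes from the log, and nothing in this report suggests minikube runs such a check itself.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the control socket that socket_vmnet_client connects to.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// Every VM creation in this report hit the equivalent of this branch.
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }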

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-360000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (32.850083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-360000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.457917ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-360000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (27.522583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-360000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (28.767125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
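The (-want +got) diff above has the shape produced by the go-cmp library: every expected image is prefixed with "-" and nothing appears on the "+" side, consistent with "image list" returning nothing from a host that never started. The sketch below reproduces an all-minus diff of the same form; using github.com/google/go-cmp is an assumption based on the output format alone, and the empty got slice stands in for the stopped host's output.

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	// Expected v1.31.1 images, copied from the failure above.
    	want := []string{
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    		"registry.k8s.io/coredns/coredns:v1.11.3",
    		"registry.k8s.io/etcd:3.5.15-0",
    		"registry.k8s.io/kube-apiserver:v1.31.1",
    		"registry.k8s.io/kube-controller-manager:v1.31.1",
    		"registry.k8s.io/kube-proxy:v1.31.1",
    		"registry.k8s.io/kube-scheduler:v1.31.1",
    		"registry.k8s.io/pause:3.10",
    	}
    	var got []string // "image list" output from a host that never started
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
    	}
    }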

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-360000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-360000 --alsologtostderr -v=1: exit status 83 (40.100958ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-360000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-360000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:30:01.056833    6111 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:01.056968    6111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:01.056971    6111 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:01.056974    6111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:01.057119    6111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:01.057330    6111 out.go:352] Setting JSON to false
	I0923 17:30:01.057339    6111 mustload.go:65] Loading cluster: embed-certs-360000
	I0923 17:30:01.057564    6111 config.go:182] Loaded profile config "embed-certs-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:01.061027    6111 out.go:177] * The control-plane node embed-certs-360000 host is not running: state=Stopped
	I0923 17:30:01.064867    6111 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-360000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-360000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (28.180833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (28.431167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.85465875s)

                                                
                                                
-- stdout --
	* [newest-cni-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-872000" primary control-plane node in "newest-cni-872000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-872000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:30:01.372794    6218 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:01.372984    6218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:01.372987    6218 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:01.372990    6218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:01.373130    6218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:01.374219    6218 out.go:352] Setting JSON to false
	I0923 17:30:01.390208    6218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3564,"bootTime":1727134237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:30:01.390278    6218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:30:01.394910    6218 out.go:177] * [newest-cni-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:30:01.401867    6218 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:30:01.401930    6218 notify.go:220] Checking for updates...
	I0923 17:30:01.408931    6218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:30:01.411897    6218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:30:01.414912    6218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:30:01.418046    6218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:30:01.420872    6218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:30:01.424174    6218 config.go:182] Loaded profile config "default-k8s-diff-port-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:01.424236    6218 config.go:182] Loaded profile config "multinode-317000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:01.424285    6218 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:30:01.427896    6218 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 17:30:01.439885    6218 start.go:297] selected driver: qemu2
	I0923 17:30:01.439898    6218 start.go:901] validating driver "qemu2" against <nil>
	I0923 17:30:01.439908    6218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:30:01.442251    6218 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0923 17:30:01.442292    6218 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0923 17:30:01.449760    6218 out.go:177] * Automatically selected the socket_vmnet network
	I0923 17:30:01.452994    6218 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 17:30:01.453013    6218 cni.go:84] Creating CNI manager for ""
	I0923 17:30:01.453037    6218 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:30:01.453043    6218 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 17:30:01.453067    6218 start.go:340] cluster config:
	{Name:newest-cni-872000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:30:01.456692    6218 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:30:01.463809    6218 out.go:177] * Starting "newest-cni-872000" primary control-plane node in "newest-cni-872000" cluster
	I0923 17:30:01.467825    6218 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:30:01.467838    6218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:30:01.467845    6218 cache.go:56] Caching tarball of preloaded images
	I0923 17:30:01.467906    6218 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:30:01.467912    6218 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:30:01.467983    6218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/newest-cni-872000/config.json ...
	I0923 17:30:01.467994    6218 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/newest-cni-872000/config.json: {Name:mk4d78c8ef270b5dfbe11bd3e9bdc84b507d2fbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 17:30:01.468325    6218 start.go:360] acquireMachinesLock for newest-cni-872000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:01.468361    6218 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "newest-cni-872000"
	I0923 17:30:01.468375    6218 start.go:93] Provisioning new machine with config: &{Name:newest-cni-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:30:01.468402    6218 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:30:01.476915    6218 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:30:01.495306    6218 start.go:159] libmachine.API.Create for "newest-cni-872000" (driver="qemu2")
	I0923 17:30:01.495337    6218 client.go:168] LocalClient.Create starting
	I0923 17:30:01.495410    6218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:30:01.495441    6218 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:01.495451    6218 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:01.495491    6218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:30:01.495519    6218 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:01.495527    6218 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:01.495963    6218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:30:01.655030    6218 main.go:141] libmachine: Creating SSH key...
	I0923 17:30:01.709466    6218 main.go:141] libmachine: Creating Disk image...
	I0923 17:30:01.709471    6218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:30:01.709661    6218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:01.718714    6218 main.go:141] libmachine: STDOUT: 
	I0923 17:30:01.718747    6218 main.go:141] libmachine: STDERR: 
	I0923 17:30:01.718818    6218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2 +20000M
	I0923 17:30:01.726529    6218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:30:01.726543    6218 main.go:141] libmachine: STDERR: 
	I0923 17:30:01.726555    6218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:01.726560    6218 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:30:01.726571    6218 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:01.726600    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:b1:f5:55:e2:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:01.728060    6218 main.go:141] libmachine: STDOUT: 
	I0923 17:30:01.728072    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:01.728093    6218 client.go:171] duration metric: took 232.750375ms to LocalClient.Create
	I0923 17:30:03.730232    6218 start.go:128] duration metric: took 2.261830375s to createHost
	I0923 17:30:03.730282    6218 start.go:83] releasing machines lock for "newest-cni-872000", held for 2.261929458s
	W0923 17:30:03.730327    6218 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:03.738699    6218 out.go:177] * Deleting "newest-cni-872000" in qemu2 ...
	W0923 17:30:03.769289    6218 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:03.769323    6218 start.go:729] Will try again in 5 seconds ...
	I0923 17:30:08.771446    6218 start.go:360] acquireMachinesLock for newest-cni-872000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:08.771884    6218 start.go:364] duration metric: took 360.375µs to acquireMachinesLock for "newest-cni-872000"
	I0923 17:30:08.772066    6218 start.go:93] Provisioning new machine with config: &{Name:newest-cni-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 17:30:08.772362    6218 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 17:30:08.777104    6218 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 17:30:08.826673    6218 start.go:159] libmachine.API.Create for "newest-cni-872000" (driver="qemu2")
	I0923 17:30:08.826736    6218 client.go:168] LocalClient.Create starting
	I0923 17:30:08.826831    6218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/ca.pem
	I0923 17:30:08.826885    6218 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:08.826905    6218 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:08.826971    6218 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19696-1109/.minikube/certs/cert.pem
	I0923 17:30:08.827015    6218 main.go:141] libmachine: Decoding PEM data...
	I0923 17:30:08.827026    6218 main.go:141] libmachine: Parsing certificate...
	I0923 17:30:08.827590    6218 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0923 17:30:09.051152    6218 main.go:141] libmachine: Creating SSH key...
	I0923 17:30:09.133005    6218 main.go:141] libmachine: Creating Disk image...
	I0923 17:30:09.133012    6218 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 17:30:09.133244    6218 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:09.142991    6218 main.go:141] libmachine: STDOUT: 
	I0923 17:30:09.143009    6218 main.go:141] libmachine: STDERR: 
	I0923 17:30:09.143066    6218 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2 +20000M
	I0923 17:30:09.150907    6218 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 17:30:09.150926    6218 main.go:141] libmachine: STDERR: 
	I0923 17:30:09.150937    6218 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:09.150943    6218 main.go:141] libmachine: Starting QEMU VM...
	I0923 17:30:09.150957    6218 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:09.150984    6218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bb:f3:2f:5b:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:09.152629    6218 main.go:141] libmachine: STDOUT: 
	I0923 17:30:09.152643    6218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:09.152655    6218 client.go:171] duration metric: took 325.915625ms to LocalClient.Create
	I0923 17:30:11.154843    6218 start.go:128] duration metric: took 2.382463833s to createHost
	I0923 17:30:11.154937    6218 start.go:83] releasing machines lock for "newest-cni-872000", held for 2.383024167s
	W0923 17:30:11.155388    6218 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:11.173126    6218 out.go:201] 
	W0923 17:30:11.177180    6218 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:11.177207    6218 out.go:270] * 
	* 
	W0923 17:30:11.180001    6218 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:30:11.187049    6218 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (65.633125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
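
All of the failures in this section reduce to the one error visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no qemu2 VM ever boots. A minimal triage sketch for the CI host follows; the install paths come from the command lines recorded in the log, while the launchd service name and the manual-start flags are assumptions based on the socket_vmnet README and may need adjusting to the local install:

	# Is the socket present, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet    # assumed launchd service name

	# If the daemon is down, start it by hand (gateway flag assumed from the README):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet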

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-534000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-534000 create -f testdata/busybox.yaml: exit status 1 (29.4535ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-534000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-534000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.416958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.515625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-534000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-534000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-534000 describe deploy/metrics-server -n kube-system: exit status 1 (26.885833ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-534000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-534000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.254125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191653542s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-534000" primary control-plane node in "default-k8s-diff-port-534000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-534000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:30:11.964879    6467 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:11.965015    6467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:11.965019    6467 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:11.965021    6467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:11.965135    6467 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:11.966107    6467 out.go:352] Setting JSON to false
	I0923 17:30:11.982024    6467 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3574,"bootTime":1727134237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:30:11.982087    6467 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:30:11.987250    6467 out.go:177] * [default-k8s-diff-port-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:30:11.994233    6467 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:30:11.994290    6467 notify.go:220] Checking for updates...
	I0923 17:30:12.002136    6467 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:30:12.005163    6467 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:30:12.008206    6467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:30:12.011173    6467 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:30:12.018182    6467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:30:12.022470    6467 config.go:182] Loaded profile config "default-k8s-diff-port-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:12.022757    6467 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:30:12.027164    6467 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:30:12.034137    6467 start.go:297] selected driver: qemu2
	I0923 17:30:12.034143    6467 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:30:12.034194    6467 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:30:12.036490    6467 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 17:30:12.036518    6467 cni.go:84] Creating CNI manager for ""
	I0923 17:30:12.036539    6467 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:30:12.036562    6467 start.go:340] cluster config:
	{Name:default-k8s-diff-port-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:30:12.040052    6467 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:30:12.047118    6467 out.go:177] * Starting "default-k8s-diff-port-534000" primary control-plane node in "default-k8s-diff-port-534000" cluster
	I0923 17:30:12.051176    6467 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:30:12.051193    6467 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:30:12.051197    6467 cache.go:56] Caching tarball of preloaded images
	I0923 17:30:12.051261    6467 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:30:12.051267    6467 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:30:12.051319    6467 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/default-k8s-diff-port-534000/config.json ...
	I0923 17:30:12.051724    6467 start.go:360] acquireMachinesLock for default-k8s-diff-port-534000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:12.051755    6467 start.go:364] duration metric: took 23.917µs to acquireMachinesLock for "default-k8s-diff-port-534000"
	I0923 17:30:12.051766    6467 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:30:12.051771    6467 fix.go:54] fixHost starting: 
	I0923 17:30:12.051901    6467 fix.go:112] recreateIfNeeded on default-k8s-diff-port-534000: state=Stopped err=<nil>
	W0923 17:30:12.051910    6467 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:30:12.056141    6467 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-534000" ...
	I0923 17:30:12.064194    6467 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:12.064232    6467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:03:27:28:a7:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:30:12.066520    6467 main.go:141] libmachine: STDOUT: 
	I0923 17:30:12.066541    6467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:12.066577    6467 fix.go:56] duration metric: took 14.804958ms for fixHost
	I0923 17:30:12.066583    6467 start.go:83] releasing machines lock for "default-k8s-diff-port-534000", held for 14.823459ms
	W0923 17:30:12.066592    6467 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:12.066630    6467 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:12.066635    6467 start.go:729] Will try again in 5 seconds ...
	I0923 17:30:17.068848    6467 start.go:360] acquireMachinesLock for default-k8s-diff-port-534000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:17.069300    6467 start.go:364] duration metric: took 326.417µs to acquireMachinesLock for "default-k8s-diff-port-534000"
	I0923 17:30:17.069441    6467 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:30:17.069461    6467 fix.go:54] fixHost starting: 
	I0923 17:30:17.070208    6467 fix.go:112] recreateIfNeeded on default-k8s-diff-port-534000: state=Stopped err=<nil>
	W0923 17:30:17.070234    6467 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:30:17.079801    6467 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-534000" ...
	I0923 17:30:17.083780    6467 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:17.084036    6467 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:03:27:28:a7:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/default-k8s-diff-port-534000/disk.qcow2
	I0923 17:30:17.093181    6467 main.go:141] libmachine: STDOUT: 
	I0923 17:30:17.093271    6467 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:17.093384    6467 fix.go:56] duration metric: took 23.919958ms for fixHost
	I0923 17:30:17.093407    6467 start.go:83] releasing machines lock for "default-k8s-diff-port-534000", held for 24.086625ms
	W0923 17:30:17.093625    6467 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-534000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:17.100651    6467 out.go:201] 
	W0923 17:30:17.104856    6467 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:17.104898    6467 out.go:270] * 
	* 
	W0923 17:30:17.107555    6467 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:30:17.114828    6467 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (69.732417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
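
As with FirstStart, both fixHost attempts above die on the same refused socket connection before the retry loop gives up. The recovery path the error text itself suggests is to delete and recreate the profile; a sketch using the flags recorded in the log (it can only succeed once the socket_vmnet daemon is reachable again):

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-534000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-534000 --memory=2200 \
		--apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.1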

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.182314083s)

                                                
                                                
-- stdout --
	* [newest-cni-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-872000" primary control-plane node in "newest-cni-872000" cluster
	* Restarting existing qemu2 VM for "newest-cni-872000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-872000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:30:14.379305    6490 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:14.379440    6490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:14.379443    6490 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:14.379446    6490 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:14.379583    6490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:14.380603    6490 out.go:352] Setting JSON to false
	I0923 17:30:14.396727    6490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3577,"bootTime":1727134237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 17:30:14.396791    6490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 17:30:14.401508    6490 out.go:177] * [newest-cni-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 17:30:14.408362    6490 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 17:30:14.408399    6490 notify.go:220] Checking for updates...
	I0923 17:30:14.415362    6490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 17:30:14.418319    6490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 17:30:14.421308    6490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 17:30:14.424269    6490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 17:30:14.427301    6490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 17:30:14.430650    6490 config.go:182] Loaded profile config "newest-cni-872000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:14.430957    6490 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 17:30:14.434267    6490 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 17:30:14.441293    6490 start.go:297] selected driver: qemu2
	I0923 17:30:14.441303    6490 start.go:901] validating driver "qemu2" against &{Name:newest-cni-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:30:14.441359    6490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 17:30:14.443630    6490 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 17:30:14.443657    6490 cni.go:84] Creating CNI manager for ""
	I0923 17:30:14.443682    6490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 17:30:14.443706    6490 start.go:340] cluster config:
	{Name:newest-cni-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 17:30:14.447221    6490 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 17:30:14.454325    6490 out.go:177] * Starting "newest-cni-872000" primary control-plane node in "newest-cni-872000" cluster
	I0923 17:30:14.458197    6490 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 17:30:14.458215    6490 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 17:30:14.458227    6490 cache.go:56] Caching tarball of preloaded images
	I0923 17:30:14.458284    6490 preload.go:172] Found /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 17:30:14.458290    6490 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 17:30:14.458354    6490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/newest-cni-872000/config.json ...
	I0923 17:30:14.458873    6490 start.go:360] acquireMachinesLock for newest-cni-872000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:14.458902    6490 start.go:364] duration metric: took 22.958µs to acquireMachinesLock for "newest-cni-872000"
	I0923 17:30:14.458912    6490 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:30:14.458917    6490 fix.go:54] fixHost starting: 
	I0923 17:30:14.459042    6490 fix.go:112] recreateIfNeeded on newest-cni-872000: state=Stopped err=<nil>
	W0923 17:30:14.459051    6490 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:30:14.463357    6490 out.go:177] * Restarting existing qemu2 VM for "newest-cni-872000" ...
	I0923 17:30:14.471316    6490 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:14.471348    6490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bb:f3:2f:5b:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:14.473296    6490 main.go:141] libmachine: STDOUT: 
	I0923 17:30:14.473324    6490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:14.473352    6490 fix.go:56] duration metric: took 14.432833ms for fixHost
	I0923 17:30:14.473356    6490 start.go:83] releasing machines lock for "newest-cni-872000", held for 14.449833ms
	W0923 17:30:14.473362    6490 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:14.473394    6490 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:14.473399    6490 start.go:729] Will try again in 5 seconds ...
	I0923 17:30:19.475647    6490 start.go:360] acquireMachinesLock for newest-cni-872000: {Name:mkd669facc5f9c2096d5de154b6696859a5e6f32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 17:30:19.476143    6490 start.go:364] duration metric: took 371.542µs to acquireMachinesLock for "newest-cni-872000"
	I0923 17:30:19.476280    6490 start.go:96] Skipping create...Using existing machine configuration
	I0923 17:30:19.476301    6490 fix.go:54] fixHost starting: 
	I0923 17:30:19.477097    6490 fix.go:112] recreateIfNeeded on newest-cni-872000: state=Stopped err=<nil>
	W0923 17:30:19.477125    6490 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 17:30:19.485512    6490 out.go:177] * Restarting existing qemu2 VM for "newest-cni-872000" ...
	I0923 17:30:19.489491    6490 qemu.go:418] Using hvf for hardware acceleration
	I0923 17:30:19.489815    6490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:bb:f3:2f:5b:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19696-1109/.minikube/machines/newest-cni-872000/disk.qcow2
	I0923 17:30:19.499517    6490 main.go:141] libmachine: STDOUT: 
	I0923 17:30:19.499572    6490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 17:30:19.499668    6490 fix.go:56] duration metric: took 23.369083ms for fixHost
	I0923 17:30:19.499706    6490 start.go:83] releasing machines lock for "newest-cni-872000", held for 23.541833ms
	W0923 17:30:19.499905    6490 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-872000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-872000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 17:30:19.508532    6490 out.go:201] 
	W0923 17:30:19.511525    6490 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 17:30:19.511581    6490 out.go:270] * 
	* 
	W0923 17:30:19.514610    6490 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 17:30:19.521506    6490 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-872000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (71.058917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-534000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (32.307125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-534000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-534000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-534000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.229958ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-534000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-534000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (28.84ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-534000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.617625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
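
The diff above is go-cmp "(-want +got)" output: each "-" line is an expected v1.31.1 image that is absent, and the "got" side is empty because the VM never booted, so the image probe had nothing to list. On a healthy cluster the same probe would return the eight images named above:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-534000 image list --format=json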

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-534000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-534000 --alsologtostderr -v=1: exit status 83 (41.405667ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-534000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-534000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 17:30:17.386395    6509 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:17.386562    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:17.386566    6509 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:17.386568    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:17.386725    6509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:17.386931    6509 out.go:352] Setting JSON to false
	I0923 17:30:17.386940    6509 mustload.go:65] Loading cluster: default-k8s-diff-port-534000
	I0923 17:30:17.387169    6509 config.go:182] Loaded profile config "default-k8s-diff-port-534000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:17.390563    6509 out.go:177] * The control-plane node default-k8s-diff-port-534000 host is not running: state=Stopped
	I0923 17:30:17.394337    6509 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-534000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-534000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.360833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (29.284584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
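Exit status 83 above is minikube declining to pause a profile whose host is Stopped; `status` itself exits 7 on the same profile, as every post-mortem shows. A minimal sketch that reproduces the guard by gating pause on a successful status call (binary path and profile taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "default-k8s-diff-port-534000"
        // `status` exits non-zero (7 here) when the host is Stopped;
        // Output still captures the "Stopped" state string.
        out, err := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
        if err != nil {
            fmt.Printf("host is %q (%v); skipping pause\n", out, err)
            return
        }
        if err := exec.Command("out/minikube-darwin-arm64",
            "pause", "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
            fmt.Println("pause failed:", err)
        }
    }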

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-872000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (30.38375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-872000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-872000 --alsologtostderr -v=1: exit status 83 (40.275125ms)
-- stdout --
	* The control-plane node newest-cni-872000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-872000"
-- /stdout --
** stderr ** 
	I0923 17:30:19.709133    6533 out.go:345] Setting OutFile to fd 1 ...
	I0923 17:30:19.709291    6533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:19.709294    6533 out.go:358] Setting ErrFile to fd 2...
	I0923 17:30:19.709296    6533 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 17:30:19.709439    6533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 17:30:19.709669    6533 out.go:352] Setting JSON to false
	I0923 17:30:19.709676    6533 mustload.go:65] Loading cluster: newest-cni-872000
	I0923 17:30:19.709918    6533 config.go:182] Loaded profile config "newest-cni-872000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 17:30:19.713002    6533 out.go:177] * The control-plane node newest-cni-872000 host is not running: state=Stopped
	I0923 17:30:19.716711    6533 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-872000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-872000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (30.648208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-872000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (29.358ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-872000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (154/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 14.63
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 200.16
29 TestAddons/serial/Volcano 38.25
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 19.09
35 TestAddons/parallel/InspektorGadget 10.27
36 TestAddons/parallel/MetricsServer 5.29
38 TestAddons/parallel/CSI 39.64
39 TestAddons/parallel/Headlamp 16.65
40 TestAddons/parallel/CloudSpanner 5.2
41 TestAddons/parallel/LocalPath 41.96
42 TestAddons/parallel/NvidiaDevicePlugin 5.2
43 TestAddons/parallel/Yakd 10.29
44 TestAddons/StoppedEnableDisable 9.4
52 TestHyperKitDriverInstallOrUpdate 10.69
55 TestErrorSpam/setup 36.33
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.24
58 TestErrorSpam/pause 0.71
59 TestErrorSpam/unpause 0.62
60 TestErrorSpam/stop 55.26
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 48.79
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.26
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.05
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
72 TestFunctional/serial/CacheCmd/cache/add_local 1.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 1.99
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
80 TestFunctional/serial/ExtraConfig 38.38
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.7
83 TestFunctional/serial/LogsFileCmd 0.72
84 TestFunctional/serial/InvalidService 4.02
86 TestFunctional/parallel/ConfigCmd 0.23
87 TestFunctional/parallel/DashboardCmd 8.04
88 TestFunctional/parallel/DryRun 0.23
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.24
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 25.76
98 TestFunctional/parallel/SSHCmd 0.12
99 TestFunctional/parallel/CpCmd 0.44
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.38
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.11
110 TestFunctional/parallel/License 0.23
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.15
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.08
118 TestFunctional/parallel/ImageCommands/Setup 1.83
119 TestFunctional/parallel/DockerEnv/bash 0.28
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.35
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.24
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.93
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
136 TestFunctional/parallel/ServiceCmd/List 0.12
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
139 TestFunctional/parallel/ServiceCmd/Format 0.09
140 TestFunctional/parallel/ServiceCmd/URL 0.1
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
148 TestFunctional/parallel/ProfileCmd/profile_list 0.14
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
150 TestFunctional/parallel/MountCmd/any-port 4.94
151 TestFunctional/parallel/MountCmd/specific-port 1
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.71
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 178.62
160 TestMultiControlPlane/serial/DeployApp 4.72
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 53.79
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.27
165 TestMultiControlPlane/serial/CopyFile 4.25
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.98
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.11
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 0.96
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.34
276 TestNoKubernetes/serial/Stop 2.07
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
293 TestStartStop/group/old-k8s-version/serial/Stop 3.66
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/no-preload/serial/Stop 1.91
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
311 TestStartStop/group/embed-certs/serial/Stop 3.26
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 4.01
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
331 TestStartStop/group/newest-cni/serial/Stop 2.9
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 16:37:03.413515    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 16:37:03.413834    1596 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
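The check above amounts to a stat of the cached tarball under the run's .minikube directory. A minimal sketch of the same lookup, with the path layout copied from the log line:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Layout as logged: <MINIKUBE_HOME>/cache/preloaded-tarball/<tarball>
        home := "/Users/jenkins/minikube-integration/19696-1109/.minikube"
        tarball := filepath.Join(home, "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
        if _, err := os.Stat(tarball); err != nil {
            fmt.Println("no local preload:", err)
            return
        }
        fmt.Println("Found local preload:", tarball)
    }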

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-711000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-711000: exit status 85 (97.792ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:36 PDT |          |
	|         | -p download-only-711000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 16:36:46
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 16:36:46.170083    1600 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:36:46.170239    1600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:36:46.170242    1600 out.go:358] Setting ErrFile to fd 2...
	I0923 16:36:46.170244    1600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:36:46.170360    1600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	W0923 16:36:46.170447    1600 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19696-1109/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19696-1109/.minikube/config/config.json: no such file or directory
	I0923 16:36:46.171701    1600 out.go:352] Setting JSON to true
	I0923 16:36:46.190315    1600 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":369,"bootTime":1727134237,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:36:46.190422    1600 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:36:46.195638    1600 out.go:97] [download-only-711000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:36:46.195781    1600 notify.go:220] Checking for updates...
	W0923 16:36:46.195802    1600 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 16:36:46.199578    1600 out.go:169] MINIKUBE_LOCATION=19696
	I0923 16:36:46.201201    1600 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:36:46.205670    1600 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:36:46.208714    1600 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:36:46.211603    1600 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	W0923 16:36:46.217651    1600 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 16:36:46.217876    1600 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:36:46.221503    1600 out.go:97] Using the qemu2 driver based on user configuration
	I0923 16:36:46.221522    1600 start.go:297] selected driver: qemu2
	I0923 16:36:46.221525    1600 start.go:901] validating driver "qemu2" against <nil>
	I0923 16:36:46.221591    1600 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 16:36:46.224677    1600 out.go:169] Automatically selected the socket_vmnet network
	I0923 16:36:46.230776    1600 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 16:36:46.230873    1600 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 16:36:46.230931    1600 cni.go:84] Creating CNI manager for ""
	I0923 16:36:46.230972    1600 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 16:36:46.231025    1600 start.go:340] cluster config:
	{Name:download-only-711000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-711000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:36:46.235887    1600 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 16:36:46.238541    1600 out.go:97] Downloading VM boot image ...
	I0923 16:36:46.238555    1600 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0923 16:36:55.471437    1600 out.go:97] Starting "download-only-711000" primary control-plane node in "download-only-711000" cluster
	I0923 16:36:55.471464    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:36:55.528109    1600 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 16:36:55.528117    1600 cache.go:56] Caching tarball of preloaded images
	I0923 16:36:55.528284    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:36:55.533420    1600 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 16:36:55.533427    1600 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:36:55.613043    1600 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 16:37:02.062404    1600 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:37:02.062574    1600 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:37:02.757773    1600 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 16:37:02.757978    1600 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/download-only-711000/config.json ...
	I0923 16:37:02.757994    1600 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/download-only-711000/config.json: {Name:mk62623163fd2442f60858d058e4f341b8f3d648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 16:37:02.758242    1600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 16:37:02.758466    1600 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 16:37:03.362385    1600 out.go:193] 
	W0923 16:37:03.371479    1600 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19696-1109/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0 0x104dd96c0] Decompressors:map[bz2:0x14000120da0 gz:0x14000120da8 tar:0x14000120ce0 tar.bz2:0x14000120d10 tar.gz:0x14000120d40 tar.xz:0x14000120d50 tar.zst:0x14000120d60 tbz2:0x14000120d10 tgz:0x14000120d40 txz:0x14000120d50 tzst:0x14000120d60 xz:0x14000120dc0 zip:0x14000120dd0 zst:0x14000120dc8] Getters:map[file:0x14000714840 http:0x140006d2410 https:0x140006d2690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 16:37:03.371519    1600 out_reason.go:110] 
	W0923 16:37:03.379377    1600 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 16:37:03.383298    1600 out.go:193] 
	
	
	* The control-plane node download-only-711000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-711000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
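The failure buried in this passing subtest is the real story of the v1.20.0 group: dl.k8s.io serves no darwin/arm64 kubectl checksum for v1.20.0 (the 404 above), so the getter aborts and kubectl is never cached. A minimal probe of the same checksum URL, copied from the error:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Checksum URL exactly as in the getter error above; expect 404,
        // since v1.20.0 predates published darwin/arm64 kubectl builds.
        url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
        resp, err := http.Head(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(url, "->", resp.Status)
    }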

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-711000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (14.63s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-940000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-940000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (14.629829s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (14.63s)
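The json-events subtest drives `start -o=json`, which streams one JSON event per line on stdout. A minimal consumer sketch; it decodes each event into a generic map because the event schema is not reproduced in this report:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same invocation as the test, minus --alsologtostderr.
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-o=json",
            "--download-only", "-p", "download-only-940000", "--force",
            "--kubernetes-version=v1.31.1", "--container-runtime=docker",
            "--driver=qemu2")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        dec := json.NewDecoder(stdout)
        for {
            var ev map[string]interface{}
            if err := dec.Decode(&ev); err != nil {
                break // io.EOF once the stream closes
            }
            fmt.Println("event:", ev["type"])
        }
        cmd.Wait()
    }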

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 16:37:18.404739    1596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 16:37:18.404796    1596 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-940000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-940000: exit status 85 (83.718916ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:36 PDT |                     |
	|         | -p download-only-711000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| delete  | -p download-only-711000        | download-only-711000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT | 23 Sep 24 16:37 PDT |
	| start   | -o=json --download-only        | download-only-940000 | jenkins | v1.34.0 | 23 Sep 24 16:37 PDT |                     |
	|         | -p download-only-940000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 16:37:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 16:37:03.802554    1628 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:37:03.802689    1628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:03.802693    1628 out.go:358] Setting ErrFile to fd 2...
	I0923 16:37:03.802696    1628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:37:03.802819    1628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:37:03.803972    1628 out.go:352] Setting JSON to true
	I0923 16:37:03.820690    1628 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":386,"bootTime":1727134237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:37:03.820746    1628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:37:03.825058    1628 out.go:97] [download-only-940000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:37:03.825161    1628 notify.go:220] Checking for updates...
	I0923 16:37:03.829028    1628 out.go:169] MINIKUBE_LOCATION=19696
	I0923 16:37:03.832042    1628 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:37:03.836038    1628 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:37:03.838986    1628 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:37:03.842052    1628 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	W0923 16:37:03.848042    1628 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 16:37:03.848193    1628 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:37:03.851000    1628 out.go:97] Using the qemu2 driver based on user configuration
	I0923 16:37:03.851010    1628 start.go:297] selected driver: qemu2
	I0923 16:37:03.851012    1628 start.go:901] validating driver "qemu2" against <nil>
	I0923 16:37:03.851054    1628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 16:37:03.854046    1628 out.go:169] Automatically selected the socket_vmnet network
	I0923 16:37:03.859103    1628 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 16:37:03.859191    1628 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 16:37:03.859214    1628 cni.go:84] Creating CNI manager for ""
	I0923 16:37:03.859237    1628 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 16:37:03.859242    1628 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 16:37:03.859288    1628 start.go:340] cluster config:
	{Name:download-only-940000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-940000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:37:03.862724    1628 iso.go:125] acquiring lock: {Name:mkd0492d0b5a24ff029bb01ef60b15a1f33f6a03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 16:37:03.866008    1628 out.go:97] Starting "download-only-940000" primary control-plane node in "download-only-940000" cluster
	I0923 16:37:03.866014    1628 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:03.924828    1628 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 16:37:03.924846    1628 cache.go:56] Caching tarball of preloaded images
	I0923 16:37:03.925024    1628 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 16:37:03.929092    1628 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 16:37:03.929099    1628 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0923 16:37:04.014250    1628 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19696-1109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-940000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-940000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-940000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-938000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-938000: exit status 85 (63.620125ms)
-- stdout --
	* Profile "addons-938000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-938000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-938000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-938000: exit status 85 (67.525458ms)
-- stdout --
	* Profile "addons-938000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-938000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (200.16s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-938000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-938000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m20.157586125s)
--- PASS: TestAddons/Setup (200.16s)

TestAddons/serial/Volcano (38.25s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 8.034167ms
addons_test.go:843: volcano-admission stabilized in 8.077709ms
addons_test.go:835: volcano-scheduler stabilized in 8.096584ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-xnzv8" [9dc73d10-00ba-4221-bdf3-9cf02dc33042] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00440625s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-tgsr2" [77e84c78-f742-4bd1-b783-b327046ec52a] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006985208s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8b4dp" [f4878237-1227-4a1f-8a2a-d3707710d12f] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004813209s
addons_test.go:870: (dbg) Run:  kubectl --context addons-938000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-938000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-938000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [adf8bed3-c13c-4825-850f-a75969b617c8] Pending
helpers_test.go:344: "test-job-nginx-0" [adf8bed3-c13c-4825-850f-a75969b617c8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [adf8bed3-c13c-4825-850f-a75969b617c8] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005239708s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable volcano --alsologtostderr -v=1: (10.004362417s)
--- PASS: TestAddons/serial/Volcano (38.25s)
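Each "waiting 6m0s for pods matching" line above is a label-selector poll against the namespace. A loose sketch of that loop shelling out to kubectl; the 2s interval and jsonpath query are assumptions, not the helper's actual implementation:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPods polls until every pod matching the selector reports
    // phase Running, or the deadline passes.
    func waitForPods(kctx, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kctx,
                "get", "pods", "-n", ns, "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").Output()
            phases := strings.Fields(string(out))
            if err == nil && len(phases) > 0 {
                allRunning := true
                for _, p := range phases {
                    if p != "Running" {
                        allRunning = false
                    }
                }
                if allRunning {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        return fmt.Errorf("timed out waiting for %q in %s", selector, ns)
    }

    func main() {
        if err := waitForPods("addons-938000", "volcano-system",
            "app=volcano-scheduler", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }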

TestAddons/serial/GCPAuth/Namespaces (0.09s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-938000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-938000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (19.09s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-938000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-938000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-938000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0a5266b5-964c-488d-a6dc-8fd0bf3ef827] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0a5266b5-964c-488d-a6dc-8fd0bf3ef827] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.011499208s
I0923 16:50:40.420616    1596 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-938000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable ingress-dns --alsologtostderr -v=1: (1.280946417s)
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable ingress --alsologtostderr -v=1: (7.201109667s)
--- PASS: TestAddons/parallel/Ingress (19.09s)
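The ingress probe above is a curl with a spoofed Host header, run inside the VM via `minikube ssh`. The same check in Go, where assigning req.Host overrides the Host header sent on the wire:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        // Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
        req.Host = "nginx.example.com"
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }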

TestAddons/parallel/InspektorGadget (10.27s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wh9tp" [54f1b879-64c5-4578-94e6-e625ef207af0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011768166s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-938000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-938000: (5.253957625s)
--- PASS: TestAddons/parallel/InspektorGadget (10.27s)

TestAddons/parallel/MetricsServer (5.29s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.233583ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9x8cv" [1fb0a34a-415b-42fa-8601-24f114da22ab] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009137334s
addons_test.go:413: (dbg) Run:  kubectl --context addons-938000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (39.64s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0923 16:49:35.462123    1596 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 16:49:35.478706    1596 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 16:49:35.478718    1596 kapi.go:107] duration metric: took 16.625792ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 16.640791ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-938000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-938000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a2c27348-b10c-4c50-97a4-54c8ae5fda83] Pending
helpers_test.go:344: "task-pv-pod" [a2c27348-b10c-4c50-97a4-54c8ae5fda83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a2c27348-b10c-4c50-97a4-54c8ae5fda83] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.009671125s
addons_test.go:528: (dbg) Run:  kubectl --context addons-938000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-938000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-938000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-938000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-938000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-938000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-938000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bc5223ad-044f-4d6b-82aa-abf930a3275f] Pending
helpers_test.go:344: "task-pv-pod-restore" [bc5223ad-044f-4d6b-82aa-abf930a3275f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bc5223ad-044f-4d6b-82aa-abf930a3275f] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.010345125s
addons_test.go:570: (dbg) Run:  kubectl --context addons-938000 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-938000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-938000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.127256292s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.64s)
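Note: the repeated jsonpath queries above are the harness polling the claim's phase until it reports Bound. A minimal shell equivalent, assuming the same context and claim name as the transcript, would be:

	while [ "$(kubectl --context addons-938000 -n default get pvc hpvc \
	    -o 'jsonpath={.status.phase}')" != "Bound" ]; do
	  sleep 2
	done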

TestAddons/parallel/Headlamp (16.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-938000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-vrvk7" [2c72fe90-c13b-4730-8be8-7cd1b9d37604] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-vrvk7" [2c72fe90-c13b-4730-8be8-7cd1b9d37604] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.010877083s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable headlamp --alsologtostderr -v=1: (5.2924925s)
--- PASS: TestAddons/parallel/Headlamp (16.65s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-hzk8n" [b935ff01-ef88-4f09-a2d3-f3a6af25c6f8] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005969916s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-938000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (41.96s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-938000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-938000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a4c5d346-894f-4383-ab9d-481e9129390d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a4c5d346-894f-4383-ab9d-481e9129390d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a4c5d346-894f-4383-ab9d-481e9129390d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005468417s
addons_test.go:938: (dbg) Run:  kubectl --context addons-938000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 ssh "cat /opt/local-path-provisioner/pvc-553194d9-ed8a-44df-b335-19773dfba305_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-938000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-938000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.475814958s)
--- PASS: TestAddons/parallel/LocalPath (41.96s)
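Note: the ssh step above reads back the file the test pod wrote; the path shows the provisioner's on-node layout, /opt/local-path-provisioner/pvc-<uid>_<namespace>_<claim>. To inspect provisioned volumes by hand (a sketch, not part of the test itself):

	out/minikube-darwin-arm64 -p addons-938000 ssh "ls /opt/local-path-provisioner/"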

TestAddons/parallel/NvidiaDevicePlugin (5.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-42w2d" [6219d98f-b5fd-406b-9358-a0e23f30e6ab] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009953167s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-938000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.20s)

TestAddons/parallel/Yakd (10.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7tdsj" [96510010-0deb-41e5-8c36-73f6ebe000da] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004054333s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-938000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-938000 addons disable yakd --alsologtostderr -v=1: (5.281444792s)
--- PASS: TestAddons/parallel/Yakd (10.29s)

TestAddons/StoppedEnableDisable (9.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-938000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-938000: (9.205454s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-938000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-938000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-938000
--- PASS: TestAddons/StoppedEnableDisable (9.40s)

TestHyperKitDriverInstallOrUpdate (10.69s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
I0923 17:15:39.949843    1596 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 17:15:39.950035    1596 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0923 17:15:41.887180    1596 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0923 17:15:41.887619    1596 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 17:15:41.887828    1596 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit
I0923 17:15:42.400845    1596 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40 0x10470ad40] Decompressors:map[bz2:0x1400012b820 gz:0x1400012b828 tar:0x1400012b760 tar.bz2:0x1400012b7b0 tar.gz:0x1400012b7c0 tar.xz:0x1400012b7d0 tar.zst:0x1400012b7e0 tbz2:0x1400012b7b0 tgz:0x1400012b7c0 txz:0x1400012b7d0 tzst:0x1400012b7e0 xz:0x1400012b840 zip:0x1400012b870 zst:0x1400012b848] Getters:map[file:0x140017c1e40 http:0x140006d21e0 https:0x140006d2230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 17:15:42.400977    1596 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1942664441/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.69s)
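Note: the log above shows the download fallback at work: the arch-specific artifact's checksum file 404s, so the common binary is fetched instead. A hedged curl sketch of the same fallback (checksum verification omitted; this is not minikube's actual downloader):

	base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
	curl -fsSLO "$base/docker-machine-driver-hyperkit-arm64" ||
	  curl -fsSLO "$base/docker-machine-driver-hyperkit"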

TestErrorSpam/setup (36.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-646000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-646000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 --driver=qemu2 : (36.328690333s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (36.33s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 pause
--- PASS: TestErrorSpam/pause (0.71s)

TestErrorSpam/unpause (0.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

TestErrorSpam/stop (55.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop: (3.170836958s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop: (26.058332s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-646000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-646000 stop: (26.030302375s)
--- PASS: TestErrorSpam/stop (55.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19696-1109/.minikube/files/etc/test/nested/copy/1596/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.79s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-496000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.786974209s)
--- PASS: TestFunctional/serial/StartWithProxy (48.79s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.26s)

=== RUN   TestFunctional/serial/SoftStart
I0923 16:54:01.274540    1596 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-496000 --alsologtostderr -v=8: (38.2638435s)
functional_test.go:663: soft start took 38.264266292s for "functional-496000" cluster.
I0923 16:54:39.537369    1596 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.26s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-496000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-496000 cache add registry.k8s.io/pause:3.1: (1.062618875s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1136934529/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache add minikube-local-cache-test:functional-496000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache delete minikube-local-cache-test:functional-496000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-496000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.610583ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
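Note: the round trip above, taken directly from the transcript, is: remove the cached image inside the node, reload from minikube's cache, then confirm it is back:

	out/minikube-darwin-arm64 -p functional-496000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-496000 cache reload
	out/minikube-darwin-arm64 -p functional-496000 ssh sudo crictl inspecti registry.k8s.io/pause:latest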

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (1.99s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 kubectl -- --context functional-496000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-496000 kubectl -- --context functional-496000 get pods: (1.986099s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.99s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-496000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-496000 get pods: (1.013901584s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)
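Note: the two tests above exercise the same kubectl both through the minikube wrapper and directly; with the wrapper, everything after `--` is passed to kubectl unchanged. From the transcript:

	out/minikube-darwin-arm64 -p functional-496000 kubectl -- --context functional-496000 get pods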

TestFunctional/serial/ExtraConfig (38.38s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-496000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.382390792s)
functional_test.go:761: restart took 38.382487708s for "functional-496000" cluster.
I0923 16:55:25.942772    1596 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.38s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-496000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
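Note: the health check above parses `get po -l tier=control-plane -o=json`. A compact jsonpath variant (a sketch, not the test's own query) prints each control-plane pod with its phase:

	kubectl --context functional-496000 -n kube-system get po -l tier=control-plane \
	  -o 'jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'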

TestFunctional/serial/LogsCmd (0.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.70s)

TestFunctional/serial/LogsFileCmd (0.72s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3943207804/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.72s)

TestFunctional/serial/InvalidService (4.02s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-496000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-496000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-496000: exit status 115 (144.536958ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31523 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-496000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)
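Note: exit status 115 here corresponds to the SVC_UNREACHABLE error shown in the stderr banner above. A small sketch for scripting against it (the 115 value is taken from this transcript):

	out/minikube-darwin-arm64 service invalid-svc -p functional-496000
	[ $? -eq 115 ] && echo "service unreachable, as expected"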

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 config get cpus: exit status 14 (33.9015ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 config get cpus: exit status 14 (31.715375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
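Note: `config get` on an unset key exits with status 14, which the test exercises twice above. A sketch of the same round trip with the exit code surfaced:

	out/minikube-darwin-arm64 -p functional-496000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-496000 config get cpus   # expected to print 2
	out/minikube-darwin-arm64 -p functional-496000 config unset cpus
	out/minikube-darwin-arm64 -p functional-496000 config get cpus || echo "unset (exit $?)"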

TestFunctional/parallel/DashboardCmd (8.04s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-496000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-496000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2728: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.04s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-496000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.9625ms)
-- stdout --
	* [functional-496000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0923 16:56:17.379186    2711 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:56:17.379304    2711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.379307    2711 out.go:358] Setting ErrFile to fd 2...
	I0923 16:56:17.379309    2711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.379444    2711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:56:17.380509    2711 out.go:352] Setting JSON to false
	I0923 16:56:17.398602    2711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1540,"bootTime":1727134237,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:56:17.398681    2711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:56:17.403487    2711 out.go:177] * [functional-496000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 16:56:17.412493    2711 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 16:56:17.412548    2711 notify.go:220] Checking for updates...
	I0923 16:56:17.422404    2711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:56:17.426391    2711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:56:17.429382    2711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:56:17.432365    2711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 16:56:17.435274    2711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 16:56:17.438598    2711 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:56:17.438867    2711 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:56:17.443392    2711 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 16:56:17.450400    2711 start.go:297] selected driver: qemu2
	I0923 16:56:17.450407    2711 start.go:901] validating driver "qemu2" against &{Name:functional-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:56:17.450453    2711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 16:56:17.457369    2711 out.go:201] 
	W0923 16:56:17.461401    2711 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 16:56:17.464378    2711 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
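Note: `--dry-run` still validates the requested resources; 250MB is below the 1800MB usable minimum, so minikube exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). Reproduced from the transcript:

	out/minikube-darwin-arm64 start -p functional-496000 --dry-run --memory 250MB --driver=qemu2
	echo $?   # 23 per the run above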

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-496000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-496000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.130375ms)
-- stdout --
	* [functional-496000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0923 16:56:17.603104    2722 out.go:345] Setting OutFile to fd 1 ...
	I0923 16:56:17.603210    2722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.603213    2722 out.go:358] Setting ErrFile to fd 2...
	I0923 16:56:17.603215    2722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 16:56:17.603338    2722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
	I0923 16:56:17.604738    2722 out.go:352] Setting JSON to false
	I0923 16:56:17.621628    2722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1540,"bootTime":1727134237,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0923 16:56:17.621728    2722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 16:56:17.626408    2722 out.go:177] * [functional-496000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0923 16:56:17.633397    2722 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 16:56:17.633503    2722 notify.go:220] Checking for updates...
	I0923 16:56:17.640397    2722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	I0923 16:56:17.643367    2722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 16:56:17.646440    2722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 16:56:17.650401    2722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	I0923 16:56:17.653355    2722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 16:56:17.656677    2722 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 16:56:17.656949    2722 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 16:56:17.661340    2722 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0923 16:56:17.669460    2722 start.go:297] selected driver: qemu2
	I0923 16:56:17.669466    2722 start.go:901] validating driver "qemu2" against &{Name:functional-496000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-496000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 16:56:17.669520    2722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 16:56:17.676393    2722 out.go:201] 
	W0923 16:56:17.680415    2722 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 16:56:17.684407    2722 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
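Note: the French output above is the point of this test; the transcript does not show how the locale was selected. Presumably it follows the process locale, so a hedged sketch (the LC_ALL mechanism is an assumption, not confirmed by the log) would be:

	# assumption: minikube localizes from the standard locale environment variables
	LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-496000 \
	  --dry-run --memory 250MB --driver=qemu2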

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
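Note: `status -f` takes a Go template over the status struct; the fields used above are .Host, .Kubelet, .APIServer, and .Kubeconfig (the transcript's template spells the label "kublet", but the field itself is {{.Kubelet}}). For example:

	out/minikube-darwin-arm64 -p functional-496000 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'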

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c407b490-6bd5-4989-b300-56a18eaba785] Running
E0923 16:55:44.469997    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012164958s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-496000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-496000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-496000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-496000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [27c7312f-934e-40bc-bdd9-a2cbc09b2551] Pending
E0923 16:55:49.593411    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [27c7312f-934e-40bc-bdd9-a2cbc09b2551] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [27c7312f-934e-40bc-bdd9-a2cbc09b2551] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009773125s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-496000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-496000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-496000 delete -f testdata/storage-provisioner/pod.yaml: (1.228358417s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-496000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [21e3b52e-9e66-4eed-ac80-1513b6d5e15e] Pending
helpers_test.go:344: "sp-pod" [21e3b52e-9e66-4eed-ac80-1513b6d5e15e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [21e3b52e-9e66-4eed-ac80-1513b6d5e15e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.008271208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-496000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.76s)
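The sequence above is the persistence check: write /tmp/mount/foo in the first sp-pod, delete the pod, recreate it from the same manifest, and confirm via ls that the file survived, which only holds if the volume is backed by the PVC rather than pod-local storage. The testdata manifests are not reproduced in the log; a claim of roughly the shape this test applies (name taken from the "get pvc myclaim" call, access mode and size assumed) would be:

    kubectl --context functional-496000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF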

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh -n functional-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cp functional-496000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3928553965/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh -n functional-496000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh -n functional-496000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)
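The three cp invocations exercise host-to-VM, VM-to-host, and copy-into-a-missing-directory paths. Stripped of the test harness, the first two reduce to:

    # host -> VM
    out/minikube-darwin-arm64 -p functional-496000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # VM -> host (source is prefixed with the node name)
    out/minikube-darwin-arm64 -p functional-496000 cp functional-496000:/home/docker/cp-test.txt ./cp-test.txt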

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1596/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /etc/test/nested/copy/1596/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

                                                
                                    
TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1596.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /etc/ssl/certs/1596.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1596.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /usr/share/ca-certificates/1596.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /etc/ssl/certs/15962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /usr/share/ca-certificates/15962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
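The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames for the synced certificates. Assuming the usual OpenSSL layout, the hash for a given cert can be reproduced on the host with:

    openssl x509 -noout -hash -in 1596.pem    # prints the 8-hex-digit hash, e.g. 51391683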

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-496000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh "sudo systemctl is-active crio": exit status 1 (105.323875ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.11s)
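The exit status is the point of this test: with Docker as the active runtime, "systemctl is-active crio" prints "inactive" and exits 3, and minikube ssh propagates that code ("Process exited with status 3"). A quick manual check of both runtimes might look like:

    out/minikube-darwin-arm64 -p functional-496000 ssh "sudo systemctl is-active docker"   # active, exit 0
    out/minikube-darwin-arm64 -p functional-496000 ssh "sudo systemctl is-active crio"     # inactive, exit 3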

                                                
                                    
TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls --format short --alsologtostderr
E0923 16:56:20.318196    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-496000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-496000
docker.io/kicbase/echo-server:functional-496000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-496000 image ls --format short --alsologtostderr:
I0923 16:56:20.343026    2753 out.go:345] Setting OutFile to fd 1 ...
I0923 16:56:20.343211    2753 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.343217    2753 out.go:358] Setting ErrFile to fd 2...
I0923 16:56:20.343219    2753 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.343345    2753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 16:56:20.343875    2753 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.343948    2753 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.344772    2753 ssh_runner.go:195] Run: systemctl --version
I0923 16:56:20.344781    2753 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/functional-496000/id_rsa Username:docker}
I0923 16:56:20.367786    2753 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
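The same image inventory is rendered four ways in this group (short above; table, json, and yaml below). For scripting against it, the JSON form is the easiest to post-process; assuming jq is available on the host:

    out/minikube-darwin-arm64 -p functional-496000 image ls --format json \
      | jq -r '.[] | .repoTags[0] + " " + .size'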

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-496000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-496000 | c3ef9db47814d | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kicbase/echo-server               | functional-496000 | ce2d2cda2d858 | 4.78MB |
| localhost/my-image                          | functional-496000 | 27128fecc5d75 | 1.41MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-496000 image ls --format table --alsologtostderr:
I0923 16:56:22.639247    2767 out.go:345] Setting OutFile to fd 1 ...
I0923 16:56:22.639411    2767 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:22.639415    2767 out.go:358] Setting ErrFile to fd 2...
I0923 16:56:22.639417    2767 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:22.639558    2767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 16:56:22.640005    2767 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:22.640074    2767 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:22.640979    2767 ssh_runner.go:195] Run: systemctl --version
I0923 16:56:22.640987    2767 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/functional-496000/id_rsa Username:docker}
I0923 16:56:22.664013    2767 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/23 16:56:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-496000 image ls --format json --alsologtostderr:
[{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"27128fecc5d75b34a54a205d2e075fcd35b662c2299d8327697312a564f46d4f","repoDigests":[],"repoTags":["localhost/my-image:functional-496000"],"size":"1410000"},{"id":"c3ef9db47814d3c12a8237d94d242f6d2745df05f7e88cf7fa0d66b84ed1062
3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-496000"],"size":"30"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.
k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-496000"],"size":"4780000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-496000 image ls --format json --alsologtostderr:
I0923 16:56:22.569741    2765 out.go:345] Setting OutFile to fd 1 ...
I0923 16:56:22.569877    2765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:22.569882    2765 out.go:358] Setting ErrFile to fd 2...
I0923 16:56:22.569884    2765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:22.570039    2765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 16:56:22.570479    2765 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:22.570539    2765 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:22.571424    2765 ssh_runner.go:195] Run: systemctl --version
I0923 16:56:22.571435    2765 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/functional-496000/id_rsa Username:docker}
I0923 16:56:22.594934    2765 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-496000 image ls --format yaml --alsologtostderr:
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c3ef9db47814d3c12a8237d94d242f6d2745df05f7e88cf7fa0d66b84ed10623
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-496000
size: "30"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-496000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-496000 image ls --format yaml --alsologtostderr:
I0923 16:56:20.411521    2755 out.go:345] Setting OutFile to fd 1 ...
I0923 16:56:20.411689    2755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.411693    2755 out.go:358] Setting ErrFile to fd 2...
I0923 16:56:20.411696    2755 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.411815    2755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 16:56:20.412285    2755 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.412345    2755 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.413268    2755 ssh_runner.go:195] Run: systemctl --version
I0923 16:56:20.413277    2755 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/functional-496000/id_rsa Username:docker}
I0923 16:56:20.442751    2755 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh pgrep buildkitd: exit status 1 (62.492125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image build -t localhost/my-image:functional-496000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-496000 image build -t localhost/my-image:functional-496000 testdata/build --alsologtostderr: (1.949091166s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-496000 image build -t localhost/my-image:functional-496000 testdata/build --alsologtostderr:
I0923 16:56:20.550081    2759 out.go:345] Setting OutFile to fd 1 ...
I0923 16:56:20.550301    2759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.550306    2759 out.go:358] Setting ErrFile to fd 2...
I0923 16:56:20.550308    2759 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 16:56:20.550446    2759 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19696-1109/.minikube/bin
I0923 16:56:20.550908    2759 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.551637    2759 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 16:56:20.552550    2759 ssh_runner.go:195] Run: systemctl --version
I0923 16:56:20.552557    2759 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19696-1109/.minikube/machines/functional-496000/id_rsa Username:docker}
I0923 16:56:20.582244    2759 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1409786704.tar
I0923 16:56:20.582332    2759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 16:56:20.591219    2759 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1409786704.tar
I0923 16:56:20.594407    2759 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1409786704.tar: stat -c "%s %y" /var/lib/minikube/build/build.1409786704.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1409786704.tar': No such file or directory
I0923 16:56:20.594422    2759 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1409786704.tar --> /var/lib/minikube/build/build.1409786704.tar (3072 bytes)
I0923 16:56:20.608884    2759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1409786704
I0923 16:56:20.618171    2759 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1409786704 -xf /var/lib/minikube/build/build.1409786704.tar
I0923 16:56:20.625744    2759 docker.go:360] Building image: /var/lib/minikube/build/build.1409786704
I0923 16:56:20.625810    2759 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-496000 /var/lib/minikube/build/build.1409786704
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:27128fecc5d75b34a54a205d2e075fcd35b662c2299d8327697312a564f46d4f done
#8 naming to localhost/my-image:functional-496000 done
#8 DONE 0.1s
I0923 16:56:22.451542    2759 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-496000 /var/lib/minikube/build/build.1409786704: (1.825777541s)
I0923 16:56:22.451626    2759 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1409786704
I0923 16:56:22.455798    2759 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1409786704.tar
I0923 16:56:22.459541    2759 build_images.go:217] Built localhost/my-image:functional-496000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1409786704.tar
I0923 16:56:22.459558    2759 build_images.go:133] succeeded building to: functional-496000
I0923 16:56:22.459562    2759 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.08s)
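Since pgrep finds no buildkitd, the build context is shipped into the VM as a tar and built with docker build over SSH (steps #1-#8 above). From those steps the context can be reconstructed almost exactly; a sketch of an equivalent local layout (file contents assumed):

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    out/minikube-darwin-arm64 -p functional-496000 image build \
      -t localhost/my-image:functional-496000 .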

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.806792125s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-496000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-496000 docker-env) && out/minikube-darwin-arm64 status -p functional-496000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-496000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
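docker-env only prints shell exports; it is the eval that points the local docker client at the VM's daemon, which is why both checks run inside a single bash -c. Interactively the equivalent is:

    eval "$(out/minikube-darwin-arm64 -p functional-496000 docker-env)"
    docker images    # now listed from the daemon inside functional-496000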

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-496000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-496000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-f284x" [9190e9b5-ea27-4acd-a3b9-be8ec40be992] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-f284x" [9190e9b5-ea27-4acd-a3b9-be8ec40be992] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0923 16:55:39.324389    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.331236    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.344676    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.368091    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.411596    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.493472    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.656963    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:39.979728    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:40.623143    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:55:41.906580    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.007889584s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
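The interleaved E0923 cert_rotation lines refer to a client.crt from the addons-938000 profile, apparently deleted by an earlier test; they are background noise from the shared kubeconfig, not part of this test's result. The deployment flow itself reduces to:

    kubectl --context functional-496000 create deployment hello-node \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-496000 expose deployment hello-node \
      --type=NodePort --port=8080
    kubectl --context functional-496000 get pods -l app=hello-node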

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image load --daemon kicbase/echo-server:functional-496000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image load --daemon kicbase/echo-server:functional-496000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-496000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image load --daemon kicbase/echo-server:functional-496000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image save kicbase/echo-server:functional-496000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image rm kicbase/echo-server:functional-496000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-496000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 image save --daemon kicbase/echo-server:functional-496000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-496000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.24s)
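Together, the last few image tests form a full round trip: save the tagged image from the cluster to a tar, remove it, load it back from the tar, and finally save it into the host's Docker daemon. In sequence, as logged:

    out/minikube-darwin-arm64 -p functional-496000 image save kicbase/echo-server:functional-496000 /Users/jenkins/workspace/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-496000 image rm kicbase/echo-server:functional-496000
    out/minikube-darwin-arm64 -p functional-496000 image load /Users/jenkins/workspace/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-496000 image save --daemon kicbase/echo-server:functional-496000
    docker image inspect kicbase/echo-server:functional-496000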

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.93s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2566: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.93s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-496000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c9aeff12-8dad-421a-8ef6-91abeee6f22a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c9aeff12-8dad-421a-8ef6-91abeee6f22a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004660792s
I0923 16:55:47.632407    1596 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
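With the tunnel from StartTunnel still running, the LoadBalancer service receives a reachable ingress IP (10.101.128.43, as the later AccessDirect step shows). Outside the harness the same flow is:

    out/minikube-darwin-arm64 -p functional-496000 tunnel &    # keep running in the background
    kubectl --context functional-496000 apply -f testdata/testsvc.yaml
    kubectl --context functional-496000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'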

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service list -o json
functional_test.go:1494: Took "79.808292ms" to run "out/minikube-darwin-arm64 -p functional-496000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31927
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31927
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
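HTTPS, Format, and URL all resolve the same NodePort endpoint (192.168.105.4:31927) through different output shapes. Once the URL is known, reaching the service needs no tunnel, since NodePorts are exposed on the node IP directly; for example (assuming curl on the host):

    URL=$(out/minikube-darwin-arm64 -p functional-496000 service hello-node --url)
    curl "$URL"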

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-496000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.128.43 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0923 16:55:47.694169    1596 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0923 16:55:47.733921    1596 config.go:182] Loaded profile config "functional-496000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-496000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "102.542667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.010667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "92.475083ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.713375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (4.94s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727135770710016000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727135770710016000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727135770710016000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001/test-1727135770710016000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (56.565583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 16:56:10.767088    1596 retry.go:31] will retry after 260.574272ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 23:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 23:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 23:56 test-1727135770710016000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh cat /mount-9p/test-1727135770710016000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-496000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [983ee093-2fc4-4ba7-8e1a-2da1bb03b953] Pending
helpers_test.go:344: "busybox-mount" [983ee093-2fc4-4ba7-8e1a-2da1bb03b953] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [983ee093-2fc4-4ba7-8e1a-2da1bb03b953] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [983ee093-2fc4-4ba7-8e1a-2da1bb03b953] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00396525s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-496000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port4214617441/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.94s)

TestFunctional/parallel/MountCmd/specific-port (1s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1054151009/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.798209ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 16:56:15.710087    1596 retry.go:31] will retry after 515.080997ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1054151009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh "sudo umount -f /mount-9p": exit status 1 (60.738792ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-496000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1054151009/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.71s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T" /mount1: exit status 1 (81.892375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 16:56:16.731389    1596 retry.go:31] will retry after 366.720095ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-496000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-496000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-496000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3600535020/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.71s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-496000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-496000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-496000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (178.62s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-515000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0923 16:57:01.279125    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
E0923 16:58:23.199319    1596 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19696-1109/.minikube/profiles/addons-938000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-515000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.423974542s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.62s)

TestMultiControlPlane/serial/DeployApp (4.72s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-515000 -- rollout status deployment/busybox: (3.283905375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-9h65v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-gjt55 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-j7dfs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-9h65v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-gjt55 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-j7dfs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-9h65v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-gjt55 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-j7dfs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.72s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-9h65v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-9h65v -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-gjt55 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-gjt55 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-j7dfs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-515000 -- exec busybox-7dff88458-j7dfs -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)

TestMultiControlPlane/serial/AddWorkerNode (53.79s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-515000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-515000 -v=7 --alsologtostderr: (53.572855959s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.79s)

TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-515000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.27s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.27s)

TestMultiControlPlane/serial/CopyFile (4.25s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp testdata/cp-test.txt ha-515000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1128153755/001/cp-test_ha-515000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000:/home/docker/cp-test.txt ha-515000-m02:/home/docker/cp-test_ha-515000_ha-515000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test_ha-515000_ha-515000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000:/home/docker/cp-test.txt ha-515000-m03:/home/docker/cp-test_ha-515000_ha-515000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test_ha-515000_ha-515000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000:/home/docker/cp-test.txt ha-515000-m04:/home/docker/cp-test_ha-515000_ha-515000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test_ha-515000_ha-515000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp testdata/cp-test.txt ha-515000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1128153755/001/cp-test_ha-515000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m02:/home/docker/cp-test.txt ha-515000:/home/docker/cp-test_ha-515000-m02_ha-515000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test_ha-515000-m02_ha-515000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m02:/home/docker/cp-test.txt ha-515000-m03:/home/docker/cp-test_ha-515000-m02_ha-515000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test_ha-515000-m02_ha-515000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m02:/home/docker/cp-test.txt ha-515000-m04:/home/docker/cp-test_ha-515000-m02_ha-515000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test_ha-515000-m02_ha-515000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp testdata/cp-test.txt ha-515000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1128153755/001/cp-test_ha-515000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m03:/home/docker/cp-test.txt ha-515000:/home/docker/cp-test_ha-515000-m03_ha-515000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test_ha-515000-m03_ha-515000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m03:/home/docker/cp-test.txt ha-515000-m02:/home/docker/cp-test_ha-515000-m03_ha-515000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test_ha-515000-m03_ha-515000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m03:/home/docker/cp-test.txt ha-515000-m04:/home/docker/cp-test_ha-515000-m03_ha-515000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test_ha-515000-m03_ha-515000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp testdata/cp-test.txt ha-515000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1128153755/001/cp-test_ha-515000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m04:/home/docker/cp-test.txt ha-515000:/home/docker/cp-test_ha-515000-m04_ha-515000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000 "sudo cat /home/docker/cp-test_ha-515000-m04_ha-515000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m04:/home/docker/cp-test.txt ha-515000-m02:/home/docker/cp-test_ha-515000-m04_ha-515000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m02 "sudo cat /home/docker/cp-test_ha-515000-m04_ha-515000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 cp ha-515000-m04:/home/docker/cp-test.txt ha-515000-m03:/home/docker/cp-test_ha-515000-m04_ha-515000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-515000 ssh -n ha-515000-m03 "sudo cat /home/docker/cp-test_ha-515000-m04_ha-515000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2.976459958s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.98s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-945000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-945000 --output=json --user=testUser: (3.107565125s)
--- PASS: TestJSONOutput/stop/Command (3.11s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-869000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-869000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.726959ms)

-- stdout --
	{"specversion":"1.0","id":"2c6c76fd-e2b7-4ebd-a45e-b3e937a99f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-869000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a1518be-ec0a-48d2-bb88-4a065ec37df2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"bc197b71-8306-42ed-864e-7444fdfc877a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig"}}
	{"specversion":"1.0","id":"bc90ea73-47e9-4e99-886f-80ae06081930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4cec1b16-9f11-4082-9644-88cfdef7086c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62086806-9305-41fc-a3ed-387015a133d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube"}}
	{"specversion":"1.0","id":"4489bb48-47eb-4f78-9e90-e62412d19a21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20053603-6ea7-4290-9232-6486439ed5f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-869000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-869000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.96s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-629000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.733542ms)

-- stdout --
	* [NoKubernetes-629000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19696-1109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19696-1109/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-629000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-629000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.741542ms)

-- stdout --
	* The control-plane node NoKubernetes-629000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-629000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.67966075s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.660674375s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.34s)

TestNoKubernetes/serial/Stop (2.07s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-629000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-629000: (2.070749667s)
--- PASS: TestNoKubernetes/serial/Stop (2.07s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-629000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-629000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.64275ms)

-- stdout --
	* The control-plane node NoKubernetes-629000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-629000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-180000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestStartStop/group/old-k8s-version/serial/Stop (3.66s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-908000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-908000 --alsologtostderr -v=3: (3.65705875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-908000 -n old-k8s-version-908000: exit status 7 (58.202792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-908000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/no-preload/serial/Stop (1.91s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-117000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-117000 --alsologtostderr -v=3: (1.911449708s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.91s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-117000 -n no-preload-117000: exit status 7 (54.809625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-117000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.26s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-360000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-360000 --alsologtostderr -v=3: (3.264106708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-360000 -n embed-certs-360000: exit status 7 (56.969917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-360000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (4.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-534000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-534000 --alsologtostderr -v=3: (4.011167042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (4.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-872000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.9s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-872000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-872000 --alsologtostderr -v=3: (2.901315334s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.90s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-534000 -n default-k8s-diff-port-534000: exit status 7 (55.0385ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-534000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-872000 -n newest-cni-872000: exit status 7 (55.788459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-872000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-780000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-780000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-780000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/hosts:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/resolv.conf:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-780000

>>> host: crictl pods:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: crictl containers:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> k8s: describe netcat deployment:
error: context "cilium-780000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-780000" does not exist

>>> k8s: netcat logs:
error: context "cilium-780000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-780000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-780000" does not exist

>>> k8s: coredns logs:
error: context "cilium-780000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-780000" does not exist

>>> k8s: api server logs:
error: context "cilium-780000" does not exist

>>> host: /etc/cni:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: ip a s:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: ip r s:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: iptables-save:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: iptables table nat:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-780000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-780000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-780000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-780000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-780000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-780000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-780000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-780000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-780000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-780000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-780000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: kubelet daemon config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> k8s: kubelet logs:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-780000

>>> host: docker daemon status:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: docker daemon config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: docker system info:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: cri-docker daemon status:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: cri-docker daemon config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: cri-dockerd version:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: containerd daemon status:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: containerd daemon config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: containerd config dump:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: crio daemon status:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: crio daemon config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: /etc/crio:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"

>>> host: crio config:
* Profile "cilium-780000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-780000"
----------------------- debugLogs end: cilium-780000 [took: 2.18604875s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-780000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-597000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)