Test Report: QEMU_macOS 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-26:36389

Failed tests (98/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.04
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.94
33 TestAddons/parallel/Registry 71.3
45 TestCertOptions 10.22
46 TestCertExpiration 195.28
47 TestDockerFlags 10.02
48 TestForceSystemdFlag 10.31
49 TestForceSystemdEnv 11.37
94 TestFunctional/parallel/ServiceCmdConnect 38.98
166 TestMultiControlPlane/serial/StopSecondaryNode 64.12
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.94
168 TestMultiControlPlane/serial/RestartSecondaryNode 87.05
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.36
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.08
174 TestMultiControlPlane/serial/RestartCluster 5.26
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
180 TestImageBuild/serial/Setup 10.04
183 TestJSONOutput/start/Command 9.78
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.08
215 TestMountStart/serial/StartWithMountFirst 10.1
218 TestMultiNode/serial/FreshStart2Nodes 9.96
219 TestMultiNode/serial/DeployApp2Nodes 106.79
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 52.5
227 TestMultiNode/serial/RestartKeepsNodes 8.68
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 2.17
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.08
235 TestPreload 10.15
237 TestScheduledStopUnix 10.29
238 TestSkaffold 12.68
241 TestRunningBinaryUpgrade 596.19
243 TestKubernetesUpgrade 18.76
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.46
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.1
259 TestStoppedBinaryUpgrade/Upgrade 576.45
261 TestPause/serial/Start 9.99
271 TestNoKubernetes/serial/StartWithK8s 9.89
272 TestNoKubernetes/serial/StartWithStopK8s 5.3
273 TestNoKubernetes/serial/Start 5.3
277 TestNoKubernetes/serial/StartNoArgs 5.33
279 TestNetworkPlugins/group/auto/Start 9.84
280 TestNetworkPlugins/group/kindnet/Start 9.72
281 TestNetworkPlugins/group/calico/Start 10.01
282 TestNetworkPlugins/group/custom-flannel/Start 9.9
283 TestNetworkPlugins/group/false/Start 9.89
284 TestNetworkPlugins/group/enable-default-cni/Start 9.83
285 TestNetworkPlugins/group/flannel/Start 10.14
286 TestNetworkPlugins/group/bridge/Start 9.81
287 TestNetworkPlugins/group/kubenet/Start 9.87
290 TestStartStop/group/old-k8s-version/serial/FirstStart 10.12
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
296 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/old-k8s-version/serial/Pause 0.1
301 TestStartStop/group/embed-certs/serial/FirstStart 9.94
302 TestStartStop/group/embed-certs/serial/DeployApp 0.09
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
306 TestStartStop/group/embed-certs/serial/SecondStart 5.45
308 TestStartStop/group/no-preload/serial/FirstStart 10.05
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.11
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.96
315 TestStartStop/group/no-preload/serial/DeployApp 0.09
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
319 TestStartStop/group/no-preload/serial/SecondStart 5.75
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.65
325 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/no-preload/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/FirstStart 10.18
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
339 TestStartStop/group/newest-cni/serial/SecondStart 5.25
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (19.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-085000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-085000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (19.039301417s)

-- stdout --
	{"specversion":"1.0","id":"d140c61e-a8d6-492e-854c-6c3076920ef0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-085000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"550e22c8-ba94-46cf-9e57-b486ac00ad11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"5003d6c9-d795-4702-b192-6b3561260a24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig"}}
	{"specversion":"1.0","id":"acddb5f7-5546-4b8e-adbf-0cc65ef5f99f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f2aefed3-2026-44e9-9144-740ae46cda8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b4de7f9-adca-4828-84a3-13c44c995e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube"}}
	{"specversion":"1.0","id":"29cf5693-0bbf-4d6b-a738-0707424bce1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"925a996d-38b4-4bdc-9d5f-3a2bb07dfc55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c249fdb-8a2f-4f6b-9f8e-af93c3f6f56b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b987c977-6a5f-4965-af30-faa24231e652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db641a90-3d60-4dde-8c4b-851949f6b679","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-085000\" primary control-plane node in \"download-only-085000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f767640-7c2c-4190-a099-9c5e842ca818","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b201c1b5-d0f3-4542-bda8-637f20be8216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0] Decompressors:map[bz2:0x140004871d0 gz:0x140004871d8 tar:0x14000487180 tar.bz2:0x14000487190 tar.gz:0x140004871a0 tar.xz:0x140004871b0 tar.zst:0x140004871c0 tbz2:0x14000487190 tgz:0x14
0004871a0 txz:0x140004871b0 tzst:0x140004871c0 xz:0x140004871e0 zip:0x140004871f0 zst:0x140004871e8] Getters:map[file:0x1400078a6f0 http:0x140001520a0 https:0x14000152230] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4d8688e2-d15d-4b05-82eb-a2ab6ec57bcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0926 17:13:50.493756    1598 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:13:50.493900    1598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:50.493903    1598 out.go:358] Setting ErrFile to fd 2...
	I0926 17:13:50.493906    1598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:50.494042    1598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	W0926 17:13:50.494134    1598 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19711-1075/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19711-1075/.minikube/config/config.json: no such file or directory
	I0926 17:13:50.495413    1598 out.go:352] Setting JSON to true
	I0926 17:13:50.512795    1598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":793,"bootTime":1727395237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:13:50.512856    1598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:13:50.517397    1598 out.go:97] [download-only-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:13:50.517530    1598 notify.go:220] Checking for updates...
	W0926 17:13:50.517580    1598 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 17:13:50.520281    1598 out.go:169] MINIKUBE_LOCATION=19711
	I0926 17:13:50.523329    1598 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:13:50.527353    1598 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:13:50.530332    1598 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:13:50.533363    1598 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	W0926 17:13:50.537845    1598 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 17:13:50.538122    1598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:13:50.543363    1598 out.go:97] Using the qemu2 driver based on user configuration
	I0926 17:13:50.543383    1598 start.go:297] selected driver: qemu2
	I0926 17:13:50.543397    1598 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:13:50.543485    1598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:13:50.546284    1598 out.go:169] Automatically selected the socket_vmnet network
	I0926 17:13:50.552005    1598 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0926 17:13:50.552088    1598 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:13:50.552138    1598 cni.go:84] Creating CNI manager for ""
	I0926 17:13:50.552169    1598 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 17:13:50.552214    1598 start.go:340] cluster config:
	{Name:download-only-085000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:13:50.557364    1598 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:13:50.561320    1598 out.go:97] Downloading VM boot image ...
	I0926 17:13:50.561334    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0926 17:13:59.943400    1598 out.go:97] Starting "download-only-085000" primary control-plane node in "download-only-085000" cluster
	I0926 17:13:59.943425    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:00.007195    1598 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 17:14:00.007217    1598 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:00.007405    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:00.012500    1598 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0926 17:14:00.012507    1598 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:00.094043    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 17:14:07.930946    1598 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:07.931123    1598 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:08.628519    1598 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 17:14:08.628733    1598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/download-only-085000/config.json ...
	I0926 17:14:08.628750    1598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/download-only-085000/config.json: {Name:mk4ef8888d5b58bf059454514e2a764f50e81632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:08.629002    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:08.629194    1598 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0926 17:14:09.452306    1598 out.go:193] 
	W0926 17:14:09.460315    1598 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0] Decompressors:map[bz2:0x140004871d0 gz:0x140004871d8 tar:0x14000487180 tar.bz2:0x14000487190 tar.gz:0x140004871a0 tar.xz:0x140004871b0 tar.zst:0x140004871c0 tbz2:0x14000487190 tgz:0x140004871a0 txz:0x140004871b0 tzst:0x140004871c0 xz:0x140004871e0 zip:0x140004871f0 zst:0x140004871e8] Getters:map[file:0x1400078a6f0 http:0x140001520a0 https:0x14000152230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0926 17:14:09.460342    1598 out_reason.go:110] 
	W0926 17:14:09.470306    1598 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:14:09.474215    1598 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-085000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (19.04s)
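
The root cause surfaces in the error event above: dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 kubectl checksum file, most likely because upstream never published darwin/arm64 binaries for v1.20.0. A minimal standalone sketch (hypothetical, not part of the test suite; the URL is copied verbatim from the failure message) that reproduces the 404 from any host with network access:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// URL copied verbatim from the failure message above.
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status) // the run above saw: bad response code: 404
	}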

TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
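
This failure is downstream of TestDownloadOnly/v1.20.0/json-events above: the kubectl binary was never cached because its checksum download returned 404, so the stat check in aaa_download_only_test.go necessarily fails.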

TestOffline (9.94s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-780000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-780000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.78032875s)

-- stdout --
	* [offline-docker-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-780000" primary control-plane node in "offline-docker-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:56:19.694632    3821 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:56:19.694761    3821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:19.694765    3821 out.go:358] Setting ErrFile to fd 2...
	I0926 17:56:19.694768    3821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:19.694901    3821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:56:19.696083    3821 out.go:352] Setting JSON to false
	I0926 17:56:19.713566    3821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3342,"bootTime":1727395237,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:56:19.713667    3821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:56:19.719767    3821 out.go:177] * [offline-docker-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:56:19.727694    3821 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:56:19.727714    3821 notify.go:220] Checking for updates...
	I0926 17:56:19.735556    3821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:56:19.738594    3821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:56:19.741639    3821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:56:19.744600    3821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:56:19.747574    3821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:56:19.750951    3821 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:19.751013    3821 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:56:19.754601    3821 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:56:19.761647    3821 start.go:297] selected driver: qemu2
	I0926 17:56:19.761668    3821 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:56:19.761681    3821 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:56:19.763740    3821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:56:19.766521    3821 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:56:19.769646    3821 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:56:19.769663    3821 cni.go:84] Creating CNI manager for ""
	I0926 17:56:19.769682    3821 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:56:19.769686    3821 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:56:19.769719    3821 start.go:340] cluster config:
	{Name:offline-docker-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:56:19.773294    3821 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:56:19.778555    3821 out.go:177] * Starting "offline-docker-780000" primary control-plane node in "offline-docker-780000" cluster
	I0926 17:56:19.782575    3821 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:56:19.782604    3821 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:56:19.782615    3821 cache.go:56] Caching tarball of preloaded images
	I0926 17:56:19.782691    3821 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:56:19.782696    3821 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:56:19.782757    3821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/offline-docker-780000/config.json ...
	I0926 17:56:19.782768    3821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/offline-docker-780000/config.json: {Name:mk4eca528f97f2c5e0bd46048c7c62849d8d9acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:56:19.783081    3821 start.go:360] acquireMachinesLock for offline-docker-780000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:19.783115    3821 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "offline-docker-780000"
	I0926 17:56:19.783126    3821 start.go:93] Provisioning new machine with config: &{Name:offline-docker-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:19.783167    3821 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:19.787581    3821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:19.803571    3821 start.go:159] libmachine.API.Create for "offline-docker-780000" (driver="qemu2")
	I0926 17:56:19.803597    3821 client.go:168] LocalClient.Create starting
	I0926 17:56:19.803668    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:19.803705    3821 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:19.803714    3821 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:19.803762    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:19.803784    3821 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:19.803799    3821 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:19.804182    3821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:19.963909    3821 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:20.008269    3821 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:20.008276    3821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:20.008454    3821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:20.017835    3821 main.go:141] libmachine: STDOUT: 
	I0926 17:56:20.017864    3821 main.go:141] libmachine: STDERR: 
	I0926 17:56:20.017951    3821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2 +20000M
	I0926 17:56:20.026589    3821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:20.026608    3821 main.go:141] libmachine: STDERR: 
	I0926 17:56:20.026628    3821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:20.026633    3821 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:20.026646    3821 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:20.026675    3821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:0e:62:6f:87:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:20.028497    3821 main.go:141] libmachine: STDOUT: 
	I0926 17:56:20.028528    3821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:20.028548    3821 client.go:171] duration metric: took 224.952583ms to LocalClient.Create
	I0926 17:56:22.030589    3821 start.go:128] duration metric: took 2.247470583s to createHost
	I0926 17:56:22.030614    3821 start.go:83] releasing machines lock for "offline-docker-780000", held for 2.247557125s
	W0926 17:56:22.030645    3821 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:22.042156    3821 out.go:177] * Deleting "offline-docker-780000" in qemu2 ...
	W0926 17:56:22.058409    3821 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:22.058419    3821 start.go:729] Will try again in 5 seconds ...
	I0926 17:56:27.060563    3821 start.go:360] acquireMachinesLock for offline-docker-780000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:27.061095    3821 start.go:364] duration metric: took 424µs to acquireMachinesLock for "offline-docker-780000"
	I0926 17:56:27.061249    3821 start.go:93] Provisioning new machine with config: &{Name:offline-docker-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:27.061538    3821 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:27.072240    3821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:27.122958    3821 start.go:159] libmachine.API.Create for "offline-docker-780000" (driver="qemu2")
	I0926 17:56:27.123013    3821 client.go:168] LocalClient.Create starting
	I0926 17:56:27.123120    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:27.123182    3821 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:27.123198    3821 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:27.123262    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:27.123307    3821 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:27.123321    3821 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:27.124567    3821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:27.303573    3821 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:27.368180    3821 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:27.368192    3821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:27.368407    3821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:27.377773    3821 main.go:141] libmachine: STDOUT: 
	I0926 17:56:27.377793    3821 main.go:141] libmachine: STDERR: 
	I0926 17:56:27.377845    3821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2 +20000M
	I0926 17:56:27.385836    3821 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:27.385848    3821 main.go:141] libmachine: STDERR: 
	I0926 17:56:27.385858    3821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:27.385861    3821 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:27.385874    3821 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:27.385906    3821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:ae:43:e2:64:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/offline-docker-780000/disk.qcow2
	I0926 17:56:27.387413    3821 main.go:141] libmachine: STDOUT: 
	I0926 17:56:27.387428    3821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:27.387440    3821 client.go:171] duration metric: took 264.427792ms to LocalClient.Create
	I0926 17:56:29.389589    3821 start.go:128] duration metric: took 2.328080292s to createHost
	I0926 17:56:29.389649    3821 start.go:83] releasing machines lock for "offline-docker-780000", held for 2.328593875s
	W0926 17:56:29.390033    3821 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:29.408724    3821 out.go:201] 
	W0926 17:56:29.412718    3821 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:56:29.412756    3821 out.go:270] * 
	* 
	W0926 17:56:29.415352    3821 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:29.429631    3821 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-780000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-26 17:56:29.446453 -0700 PDT m=+2559.123880459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-780000 -n offline-docker-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-780000 -n offline-docker-780000: exit status 7 (69.428834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-780000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-780000
--- FAIL: TestOffline (9.94s)
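
Every VM-creation failure in this run reports the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon not listening on the CI host rather than at the test itself. A minimal sketch (hypothetical helper, not from the minikube repo; the socket path is taken verbatim from the logs above) to check whether the daemon is accepting connections:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failure logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Matches the failures in this run: "Connection refused" means the
			// daemon is not listening (or the agent cannot reach the socket).
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}

If the dial fails, restarting the socket_vmnet service on the CI host is the likely first step; none of the qemu2 tests can create a VM until it is back.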

TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.31675ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-gbgnl" [3e581139-c091-4cb0-9d99-224fdfd570e6] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003587083s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pj8zh" [e4e67464-6eb1-44d1-9d8c-808957ab325e] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007107875s
addons_test.go:338: (dbg) Run:  kubectl --context addons-514000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-514000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-514000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.058556917s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-514000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 ip
2024/09/26 17:26:26 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable registry --alsologtostderr -v=1
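
The probe that failed here is the in-cluster `wget --spider -S http://registry.kube-system.svc.cluster.local`, which timed out after 1m0s even though both registry pods were reported healthy. A minimal Go sketch of the same reachability check (hypothetical, not part of the suite; the service URL is copied from the test and resolves only when run inside the cluster):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Service URL copied from the test; kube-system service DNS names
		// resolve only from inside the cluster.
		const url = "http://registry.kube-system.svc.cluster.local"
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("registry unreachable:", err) // the run above timed out here
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status) // the test expects "HTTP/1.1 200"
	}
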
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-514000 -n addons-514000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |                     |
	|         | -p download-only-085000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-085000                                                                     | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | -o=json --download-only                                                                     | download-only-769000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | -p download-only-769000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-769000                                                                     | download-only-769000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-085000                                                                     | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-769000                                                                     | download-only-769000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-534000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | binary-mirror-534000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-534000                                                                     | binary-mirror-534000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| addons  | disable dashboard -p                                                                        | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | addons-514000                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | addons-514000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-514000 --wait=true                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:16 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-514000 addons disable                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:17 PDT | 26 Sep 24 17:17 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:25 PDT |
	|         | -p addons-514000                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-514000 addons disable                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:25 PDT |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-514000 addons disable                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:25 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:25 PDT |
	|         | -p addons-514000                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-514000 ssh cat                                                                       | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:25 PDT |
	|         | /opt/local-path-provisioner/pvc-5c58b83f-e535-4b6e-8a9a-9b3242b1d8cf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-514000 addons disable                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:25 PDT | 26 Sep 24 17:26 PDT |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:26 PDT | 26 Sep 24 17:26 PDT |
	|         | addons-514000                                                                               |                      |         |         |                     |                     |
	| addons  | addons-514000 addons                                                                        | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:26 PDT | 26 Sep 24 17:26 PDT |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-514000 ip                                                                            | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:26 PDT | 26 Sep 24 17:26 PDT |
	| addons  | addons-514000 addons disable                                                                | addons-514000        | jenkins | v1.34.0 | 26 Sep 24 17:26 PDT | 26 Sep 24 17:26 PDT |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:14:18
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:14:18.353462    1677 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:14:18.353611    1677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:18.353614    1677 out.go:358] Setting ErrFile to fd 2...
	I0926 17:14:18.353617    1677 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:18.353757    1677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:14:18.354780    1677 out.go:352] Setting JSON to false
	I0926 17:14:18.371023    1677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":821,"bootTime":1727395237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:14:18.371092    1677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:14:18.374844    1677 out.go:177] * [addons-514000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:14:18.381753    1677 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:14:18.381788    1677 notify.go:220] Checking for updates...
	I0926 17:14:18.388788    1677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:14:18.391684    1677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:14:18.394738    1677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:14:18.397763    1677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:14:18.400670    1677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:14:18.403900    1677 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:14:18.407743    1677 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:14:18.414709    1677 start.go:297] selected driver: qemu2
	I0926 17:14:18.414717    1677 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:14:18.414724    1677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:14:18.416944    1677 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:14:18.419749    1677 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:14:18.422735    1677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:14:18.422753    1677 cni.go:84] Creating CNI manager for ""
	I0926 17:14:18.422776    1677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:18.422780    1677 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:14:18.422806    1677 start.go:340] cluster config:
	{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:18.426442    1677 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:18.434778    1677 out.go:177] * Starting "addons-514000" primary control-plane node in "addons-514000" cluster
	I0926 17:14:18.438674    1677 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:18.438688    1677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:14:18.438699    1677 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:18.438772    1677 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:14:18.438777    1677 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:14:18.438956    1677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/config.json ...
	I0926 17:14:18.438966    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/config.json: {Name:mk2c7ad39761d48801b944dedb84340b8abf072f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
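	The profile config saved above is plain JSON, so the effective cluster settings can be inspected directly on the host; a minimal check, using the path from the log line:

	    cat /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/config.json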
	I0926 17:14:18.439345    1677 start.go:360] acquireMachinesLock for addons-514000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:14:18.439406    1677 start.go:364] duration metric: took 55.75µs to acquireMachinesLock for "addons-514000"
	I0926 17:14:18.439417    1677 start.go:93] Provisioning new machine with config: &{Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:14:18.439450    1677 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:14:18.443741    1677 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0926 17:14:18.674382    1677 start.go:159] libmachine.API.Create for "addons-514000" (driver="qemu2")
	I0926 17:14:18.674438    1677 client.go:168] LocalClient.Create starting
	I0926 17:14:18.674619    1677 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:14:18.719436    1677 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:14:18.782594    1677 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:14:19.584630    1677 main.go:141] libmachine: Creating SSH key...
	I0926 17:14:19.683563    1677 main.go:141] libmachine: Creating Disk image...
	I0926 17:14:19.683569    1677 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:14:19.683860    1677 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2
	I0926 17:14:19.702901    1677 main.go:141] libmachine: STDOUT: 
	I0926 17:14:19.702931    1677 main.go:141] libmachine: STDERR: 
	I0926 17:14:19.702995    1677 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2 +20000M
	I0926 17:14:19.711222    1677 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:14:19.711240    1677 main.go:141] libmachine: STDERR: 
	I0926 17:14:19.711250    1677 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2
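	Disk preparation is just the two qemu-img invocations shown above: convert the raw boot disk to qcow2, then grow it to the requested size. A condensed sketch of the same sequence, run from the machine directory (filenames shortened here for readability):

	    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # raw boot disk -> qcow2
	    qemu-img resize disk.qcow2 +20000M                           # grow to the requested 20000 MB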
	I0926 17:14:19.711254    1677 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:14:19.711285    1677 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:14:19.711314    1677 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b6:fb:4f:9c:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/disk.qcow2
	I0926 17:14:19.771152    1677 main.go:141] libmachine: STDOUT: 
	I0926 17:14:19.771181    1677 main.go:141] libmachine: STDERR: 
	I0926 17:14:19.771185    1677 main.go:141] libmachine: Attempt 0
	I0926 17:14:19.771199    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:19.771251    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:19.771271    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:21.773402    1677 main.go:141] libmachine: Attempt 1
	I0926 17:14:21.773491    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:21.773862    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:21.773911    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:23.776152    1677 main.go:141] libmachine: Attempt 2
	I0926 17:14:23.776370    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:23.776692    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:23.776747    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:25.778888    1677 main.go:141] libmachine: Attempt 3
	I0926 17:14:25.778916    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:25.778974    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:25.779015    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:27.781015    1677 main.go:141] libmachine: Attempt 4
	I0926 17:14:27.781023    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:27.781065    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:27.781070    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:29.783072    1677 main.go:141] libmachine: Attempt 5
	I0926 17:14:29.783094    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:29.783121    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:29.783127    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:31.784820    1677 main.go:141] libmachine: Attempt 6
	I0926 17:14:31.784844    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:31.784910    1677 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0926 17:14:31.784921    1677 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66f74a31}
	I0926 17:14:33.787005    1677 main.go:141] libmachine: Attempt 7
	I0926 17:14:33.787086    1677 main.go:141] libmachine: Searching for 46:b6:fb:4f:9c:b6 in /var/db/dhcpd_leases ...
	I0926 17:14:33.787542    1677 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0926 17:14:33.787590    1677 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:b6:fb:4f:9c:b6 ID:1,46:b6:fb:4f:9c:b6 Lease:0x66f74a68}
	I0926 17:14:33.787605    1677 main.go:141] libmachine: Found match: 46:b6:fb:4f:9c:b6
	I0926 17:14:33.787641    1677 main.go:141] libmachine: IP: 192.168.105.2
	I0926 17:14:33.787661    1677 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
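	The retry loop above is minikube polling the host DHCP daemon's lease database until an entry with the VM's MAC address appears. A rough manual equivalent, assuming the same MAC address:

	    grep -B 2 -A 2 '46:b6:fb:4f:9c:b6' /var/db/dhcpd_leases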
	I0926 17:14:36.802220    1677 machine.go:93] provisionDockerMachine start ...
	I0926 17:14:36.803455    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:36.803843    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:36.803856    1677 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:14:36.881675    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 17:14:36.881710    1677 buildroot.go:166] provisioning hostname "addons-514000"
	I0926 17:14:36.881876    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:36.882181    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:36.882194    1677 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-514000 && echo "addons-514000" | sudo tee /etc/hostname
	I0926 17:14:36.952779    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-514000
	
	I0926 17:14:36.952871    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:36.953066    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:36.953078    1677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-514000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-514000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-514000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:14:37.011307    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:14:37.011322    1677 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1075/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1075/.minikube}
	I0926 17:14:37.011335    1677 buildroot.go:174] setting up certificates
	I0926 17:14:37.011341    1677 provision.go:84] configureAuth start
	I0926 17:14:37.011349    1677 provision.go:143] copyHostCerts
	I0926 17:14:37.011458    1677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem (1078 bytes)
	I0926 17:14:37.011731    1677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem (1123 bytes)
	I0926 17:14:37.011887    1677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem (1679 bytes)
	I0926 17:14:37.011999    1677 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem org=jenkins.addons-514000 san=[127.0.0.1 192.168.105.2 addons-514000 localhost minikube]
	I0926 17:14:37.167445    1677 provision.go:177] copyRemoteCerts
	I0926 17:14:37.167519    1677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:14:37.167529    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:14:37.194820    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0926 17:14:37.203314    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:14:37.211518    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 17:14:37.219550    1677 provision.go:87] duration metric: took 208.195708ms to configureAuth
	I0926 17:14:37.219563    1677 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:14:37.219676    1677 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:14:37.219721    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:37.219808    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:37.219813    1677 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:14:37.267979    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:14:37.267985    1677 buildroot.go:70] root file system type: tmpfs
	I0926 17:14:37.268034    1677 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:14:37.268080    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:37.268180    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:37.268212    1677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:14:37.324583    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:14:37.324980    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:37.325138    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:37.325146    1677 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:14:38.734484    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 17:14:38.734497    1677 machine.go:96] duration metric: took 1.932304417s to provisionDockerMachine
	I0926 17:14:38.734503    1677 client.go:171] duration metric: took 20.060533292s to LocalClient.Create
	I0926 17:14:38.734516    1677 start.go:167] duration metric: took 20.060614833s to libmachine.API.Create "addons-514000"
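	One note on the "can't stat" message above: diff exits non-zero both when the files differ and when /lib/systemd/system/docker.service does not exist yet, so on this first boot the guard falls through to the install branch, which moves the rendered unit into place, reloads systemd, and enables and restarts docker (hence the "Created symlink" line). Condensed, the idiom is (paths shortened):

	    sudo diff -u docker.service docker.service.new \
	      || { sudo mv docker.service.new docker.service; \
	           sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }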
	I0926 17:14:38.734522    1677 start.go:293] postStartSetup for "addons-514000" (driver="qemu2")
	I0926 17:14:38.734529    1677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:14:38.734616    1677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:14:38.734626    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:14:38.762986    1677 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:14:38.764723    1677 info.go:137] Remote host: Buildroot 2023.02.9
	I0926 17:14:38.764733    1677 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/addons for local assets ...
	I0926 17:14:38.764834    1677 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/files for local assets ...
	I0926 17:14:38.764865    1677 start.go:296] duration metric: took 30.339583ms for postStartSetup
	I0926 17:14:38.765317    1677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/config.json ...
	I0926 17:14:38.765511    1677 start.go:128] duration metric: took 20.326532s to createHost
	I0926 17:14:38.765551    1677 main.go:141] libmachine: Using SSH client type: native
	I0926 17:14:38.765646    1677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1009f1c00] 0x1009f4440 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0926 17:14:38.765651    1677 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:14:38.815543    1677 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727396079.125601003
	
	I0926 17:14:38.815552    1677 fix.go:216] guest clock: 1727396079.125601003
	I0926 17:14:38.815556    1677 fix.go:229] Guest: 2024-09-26 17:14:39.125601003 -0700 PDT Remote: 2024-09-26 17:14:38.765514 -0700 PDT m=+20.431775543 (delta=360.087003ms)
	I0926 17:14:38.815567    1677 fix.go:200] guest clock delta is within tolerance: 360.087003ms
	I0926 17:14:38.815570    1677 start.go:83] releasing machines lock for "addons-514000", held for 20.376640333s
	I0926 17:14:38.815894    1677 ssh_runner.go:195] Run: cat /version.json
	I0926 17:14:38.815904    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:14:38.815896    1677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:14:38.815937    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:14:38.934114    1677 ssh_runner.go:195] Run: systemctl --version
	I0926 17:14:38.937201    1677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:14:38.939917    1677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:14:38.939963    1677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0926 17:14:38.947569    1677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:14:38.947577    1677 start.go:495] detecting cgroup driver to use...
	I0926 17:14:38.947725    1677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:14:38.955381    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0926 17:14:38.959368    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:14:38.963279    1677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:14:38.963312    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:14:38.967130    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:14:38.970909    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:14:38.974645    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:14:38.978498    1677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:14:38.982392    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:14:38.986289    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:14:38.990339    1677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:14:38.994300    1677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:14:38.998216    1677 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0926 17:14:38.998243    1677 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0926 17:14:39.005482    1677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
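	The three commands above are the standard bridge-netfilter preparation: probe the sysctl, load br_netfilter when the key is missing (as it is here, per the status-255 error), then enable IPv4 forwarding. Condensed, what the log executed amounts to:

	    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"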
	I0926 17:14:39.009555    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:39.101799    1677 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:14:39.113359    1677 start.go:495] detecting cgroup driver to use...
	I0926 17:14:39.113449    1677 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:14:39.120580    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:14:39.130867    1677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:14:39.137910    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:14:39.143169    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:14:39.148687    1677 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 17:14:39.186419    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:14:39.192571    1677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:14:39.198805    1677 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:14:39.200206    1677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:14:39.203402    1677 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0926 17:14:39.209556    1677 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:14:39.296307    1677 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:14:39.393608    1677 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:14:39.393667    1677 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:14:39.400047    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:39.479051    1677 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:14:41.668785    1677 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.189762s)
	I0926 17:14:41.668850    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:14:41.674356    1677 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:14:41.681382    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:14:41.686994    1677 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:14:41.776743    1677 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:14:41.859561    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:41.942469    1677 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:14:41.949193    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:14:41.954225    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:42.026525    1677 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:14:42.051765    1677 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:14:42.051866    1677 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:14:42.055284    1677 start.go:563] Will wait 60s for crictl version
	I0926 17:14:42.055333    1677 ssh_runner.go:195] Run: which crictl
	I0926 17:14:42.056800    1677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:14:42.075453    1677 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0926 17:14:42.075532    1677 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:14:42.088572    1677 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:14:42.109457    1677 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0926 17:14:42.109568    1677 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0926 17:14:42.111055    1677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 17:14:42.115317    1677 kubeadm.go:883] updating cluster {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0926 17:14:42.115367    1677 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:42.115420    1677 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:14:42.120329    1677 docker.go:685] Got preloaded images: 
	I0926 17:14:42.120337    1677 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0926 17:14:42.120381    1677 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:14:42.123822    1677 ssh_runner.go:195] Run: which lz4
	I0926 17:14:42.125244    1677 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 17:14:42.126467    1677 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 17:14:42.126478    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0926 17:14:43.370248    1677 docker.go:649] duration metric: took 1.245074959s to copy over tarball
	I0926 17:14:43.370308    1677 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 17:14:44.328552    1677 ssh_runner.go:146] rm: /preloaded.tar.lz4
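	This preload path is the main time-saver in the start sequence: rather than pulling each image from a registry, minikube copies a single lz4 tarball into the VM and unpacks it over /var, then deletes it. The extraction command from the log, reusable verbatim inside the VM:

	    sudo tar --xattrs --xattrs-include security.capability \
	      -I lz4 -C /var -xf /preloaded.tar.lz4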
	I0926 17:14:44.343454    1677 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:14:44.347114    1677 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0926 17:14:44.353139    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:44.444230    1677 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:14:47.167906    1677 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.723722167s)
	I0926 17:14:47.168032    1677 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:14:47.173633    1677 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 17:14:47.173643    1677 cache_images.go:84] Images are preloaded, skipping loading
	I0926 17:14:47.173662    1677 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0926 17:14:47.173723    1677 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-514000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:14:47.173796    1677 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:14:47.193615    1677 cni.go:84] Creating CNI manager for ""
	I0926 17:14:47.193629    1677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:47.193635    1677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:14:47.193645    1677 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-514000 NodeName:addons-514000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:14:47.193717    1677 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-514000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 17:14:47.193787    1677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0926 17:14:47.197827    1677 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:14:47.197870    1677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 17:14:47.201537    1677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0926 17:14:47.207366    1677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:14:47.213282    1677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
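The kubeadm config above (four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new (2158 bytes) and promoted to /var/tmp/minikube/kubeadm.yaml just before init (see below). A minimal sketch for sanity-checking such a rendered config without mutating node state, using kubeadm's dry-run mode:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run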
	I0926 17:14:47.219518    1677 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0926 17:14:47.221091    1677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
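The two-step pattern above keeps the hosts entry idempotent: the grep on the previous line checks whether the mapping already exists, and the rewrite strips any stale control-plane.minikube.internal line before appending the current IP. Unrolled for readability, a sketch of the same command:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.105.2	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts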
	I0926 17:14:47.225624    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:14:47.307120    1677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:14:47.317114    1677 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000 for IP: 192.168.105.2
	I0926 17:14:47.317145    1677 certs.go:194] generating shared ca certs ...
	I0926 17:14:47.317154    1677 certs.go:226] acquiring lock for ca certs: {Name:mk27a718ead98149a4ca4d0cc52012d8aa60b9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.317327    1677 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key
	I0926 17:14:47.602814    1677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt ...
	I0926 17:14:47.602831    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt: {Name:mkccb1642c64b3674cbf402433f7ae50a1f2ad47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.603218    1677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key ...
	I0926 17:14:47.603223    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key: {Name:mk3944d11dbe1ccfcc8f54c4a01726cfa5397a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.603382    1677 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key
	I0926 17:14:47.711411    1677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt ...
	I0926 17:14:47.711419    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt: {Name:mk0411b02dced731f161ede01777fd058849dc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.711577    1677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key ...
	I0926 17:14:47.711581    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key: {Name:mkfd3b62aa11f142d583af5254f2fec2efabc0dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.711732    1677 certs.go:256] generating profile certs ...
	I0926 17:14:47.711779    1677 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.key
	I0926 17:14:47.711787    1677 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt with IP's: []
	I0926 17:14:47.768315    1677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt ...
	I0926 17:14:47.768321    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: {Name:mk019cd380b142f2b08316d0fc8dbfcaa5ac0e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.768544    1677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.key ...
	I0926 17:14:47.768548    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.key: {Name:mk6a49cfdc46bc79ed6332c94f65d25bba8fec93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.768671    1677 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key.2d42d0dc
	I0926 17:14:47.768681    1677 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt.2d42d0dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0926 17:14:47.937786    1677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt.2d42d0dc ...
	I0926 17:14:47.937792    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt.2d42d0dc: {Name:mk8d0521d03d01bb8645c4b784cc63cf4298657f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.938009    1677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key.2d42d0dc ...
	I0926 17:14:47.938014    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key.2d42d0dc: {Name:mk24d5539fd423b1a69878ee80466aae93f93a1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:47.938152    1677 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt.2d42d0dc -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt
	I0926 17:14:47.938361    1677 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key.2d42d0dc -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key
	I0926 17:14:47.938461    1677 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.key
	I0926 17:14:47.938473    1677 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.crt with IP's: []
	I0926 17:14:48.009561    1677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.crt ...
	I0926 17:14:48.009565    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.crt: {Name:mkf023740737000da041ee824e9672f383f08ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:48.009693    1677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.key ...
	I0926 17:14:48.009696    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.key: {Name:mkf13a6f873c0a593d0148da0d54e7b4b39a182e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:48.009968    1677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 17:14:48.009999    1677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem (1078 bytes)
	I0926 17:14:48.010028    1677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:14:48.010055    1677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem (1679 bytes)
	I0926 17:14:48.010523    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:14:48.022318    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:14:48.031029    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:14:48.041075    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 17:14:48.049656    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0926 17:14:48.057995    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:14:48.066381    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:14:48.074532    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:14:48.082590    1677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:14:48.090663    1677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:14:48.097384    1677 ssh_runner.go:195] Run: openssl version
	I0926 17:14:48.099681    1677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:14:48.103200    1677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:48.104758    1677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:48.104780    1677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:14:48.106763    1677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
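The last three steps install minikubeCA.pem into the system trust store using OpenSSL's subject-hash naming convention: the x509 -hash call prints the hash OpenSSL derives from the certificate's subject, and a <hash>.0 symlink in /etc/ssl/certs is what its CA lookup expects. That is where the b5213941.0 name comes from:

    # prints b5213941, the subject hash behind the symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem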
	I0926 17:14:48.110685    1677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:14:48.112100    1677 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0926 17:14:48.112140    1677 kubeadm.go:392] StartCluster: {Name:addons-514000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-514000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:48.112218    1677 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:14:48.120116    1677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:14:48.123545    1677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 17:14:48.126883    1677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 17:14:48.130377    1677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 17:14:48.130384    1677 kubeadm.go:157] found existing configuration files:
	
	I0926 17:14:48.130407    1677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0926 17:14:48.133803    1677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 17:14:48.133829    1677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 17:14:48.137361    1677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0926 17:14:48.140896    1677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 17:14:48.140926    1677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 17:14:48.144243    1677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0926 17:14:48.147321    1677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 17:14:48.147347    1677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 17:14:48.150606    1677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0926 17:14:48.153942    1677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 17:14:48.153971    1677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 17:14:48.157824    1677 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 17:14:48.179874    1677 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0926 17:14:48.179941    1677 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 17:14:48.220336    1677 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 17:14:48.220395    1677 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 17:14:48.220457    1677 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 17:14:48.224579    1677 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 17:14:48.233812    1677 out.go:235]   - Generating certificates and keys ...
	I0926 17:14:48.233845    1677 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 17:14:48.233879    1677 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 17:14:48.289909    1677 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0926 17:14:48.503236    1677 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0926 17:14:48.769347    1677 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0926 17:14:48.826313    1677 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0926 17:14:48.909434    1677 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0926 17:14:48.909501    1677 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0926 17:14:48.980376    1677 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0926 17:14:48.980444    1677 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-514000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0926 17:14:49.091745    1677 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0926 17:14:49.166551    1677 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0926 17:14:49.341183    1677 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0926 17:14:49.341217    1677 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 17:14:49.578808    1677 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 17:14:49.669320    1677 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0926 17:14:49.828828    1677 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 17:14:49.871254    1677 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 17:14:49.945499    1677 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 17:14:49.945762    1677 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 17:14:49.947805    1677 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 17:14:49.955959    1677 out.go:235]   - Booting up control plane ...
	I0926 17:14:49.956009    1677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 17:14:49.956057    1677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 17:14:49.956096    1677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 17:14:49.956159    1677 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 17:14:49.958477    1677 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 17:14:49.958585    1677 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 17:14:50.050764    1677 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0926 17:14:50.050841    1677 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0926 17:14:50.555197    1677 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.22175ms
	I0926 17:14:50.555460    1677 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0926 17:14:53.554891    1677 kubeadm.go:310] [api-check] The API server is healthy after 3.001025668s
	I0926 17:14:53.561185    1677 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 17:14:53.566873    1677 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 17:14:53.573368    1677 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 17:14:53.573468    1677 kubeadm.go:310] [mark-control-plane] Marking the node addons-514000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 17:14:53.577551    1677 kubeadm.go:310] [bootstrap-token] Using token: rl4cmj.om9i8rudw4i9djs8
	I0926 17:14:53.583002    1677 out.go:235]   - Configuring RBAC rules ...
	I0926 17:14:53.583065    1677 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 17:14:53.584027    1677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 17:14:53.587992    1677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 17:14:53.588991    1677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 17:14:53.590344    1677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 17:14:53.591343    1677 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 17:14:53.964647    1677 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 17:14:54.366395    1677 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 17:14:54.964160    1677 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 17:14:54.965193    1677 kubeadm.go:310] 
	I0926 17:14:54.965292    1677 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 17:14:54.965304    1677 kubeadm.go:310] 
	I0926 17:14:54.965450    1677 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 17:14:54.965477    1677 kubeadm.go:310] 
	I0926 17:14:54.965548    1677 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 17:14:54.965635    1677 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 17:14:54.965793    1677 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 17:14:54.965810    1677 kubeadm.go:310] 
	I0926 17:14:54.965918    1677 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 17:14:54.965934    1677 kubeadm.go:310] 
	I0926 17:14:54.966005    1677 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 17:14:54.966015    1677 kubeadm.go:310] 
	I0926 17:14:54.966092    1677 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 17:14:54.966233    1677 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 17:14:54.966344    1677 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 17:14:54.966357    1677 kubeadm.go:310] 
	I0926 17:14:54.966489    1677 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 17:14:54.966714    1677 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 17:14:54.966730    1677 kubeadm.go:310] 
	I0926 17:14:54.966887    1677 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rl4cmj.om9i8rudw4i9djs8 \
	I0926 17:14:54.967066    1677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 \
	I0926 17:14:54.967106    1677 kubeadm.go:310] 	--control-plane 
	I0926 17:14:54.967115    1677 kubeadm.go:310] 
	I0926 17:14:54.967249    1677 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 17:14:54.967270    1677 kubeadm.go:310] 
	I0926 17:14:54.967379    1677 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rl4cmj.om9i8rudw4i9djs8 \
	I0926 17:14:54.967551    1677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 
	I0926 17:14:54.968045    1677 kubeadm.go:310] W0927 00:14:48.488820    1585 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 17:14:54.968558    1677 kubeadm.go:310] W0927 00:14:48.489248    1585 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0926 17:14:54.968752    1677 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
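The two W0927 warnings refer to the v1beta3 config rendered earlier; they are advisory, and kubeadm's own suggested fix is its migrate subcommand. As a sketch (the output path here is illustrative, not from this run):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-v1beta4.yaml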
	I0926 17:14:54.968777    1677 cni.go:84] Creating CNI manager for ""
	I0926 17:14:54.968800    1677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:54.972059    1677 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 17:14:54.977983    1677 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 17:14:54.988169    1677 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0926 17:14:55.000471    1677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 17:14:55.000587    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:55.000588    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-514000 minikube.k8s.io/updated_at=2024_09_26T17_14_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-514000 minikube.k8s.io/primary=true
	I0926 17:14:55.074672    1677 ops.go:34] apiserver oom_adj: -16
	I0926 17:14:55.074785    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:55.576894    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:56.077068    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:56.576912    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:57.076369    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:57.576867    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:58.076950    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:58.575852    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:59.076866    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:14:59.576377    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:00.076844    1677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 17:15:00.116436    1677 kubeadm.go:1113] duration metric: took 5.116064458s to wait for elevateKubeSystemPrivileges
	I0926 17:15:00.116452    1677 kubeadm.go:394] duration metric: took 12.004596208s to StartCluster
	I0926 17:15:00.116463    1677 settings.go:142] acquiring lock: {Name:mk68436efc4e8fe170d744b4cebdb7ddef61f64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:15:00.116621    1677 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:15:00.116816    1677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:15:00.117028    1677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0926 17:15:00.117060    1677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:15:00.117068    1677 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0926 17:15:00.117113    1677 addons.go:69] Setting yakd=true in profile "addons-514000"
	I0926 17:15:00.117121    1677 addons.go:234] Setting addon yakd=true in "addons-514000"
	I0926 17:15:00.117135    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117134    1677 addons.go:69] Setting default-storageclass=true in profile "addons-514000"
	I0926 17:15:00.117143    1677 addons.go:69] Setting inspektor-gadget=true in profile "addons-514000"
	I0926 17:15:00.117156    1677 addons.go:69] Setting storage-provisioner=true in profile "addons-514000"
	I0926 17:15:00.117160    1677 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:15:00.117165    1677 addons.go:69] Setting metrics-server=true in profile "addons-514000"
	I0926 17:15:00.117167    1677 addons.go:234] Setting addon inspektor-gadget=true in "addons-514000"
	I0926 17:15:00.117175    1677 addons.go:69] Setting ingress=true in profile "addons-514000"
	I0926 17:15:00.117174    1677 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-514000"
	I0926 17:15:00.117181    1677 addons.go:234] Setting addon ingress=true in "addons-514000"
	I0926 17:15:00.117186    1677 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-514000"
	I0926 17:15:00.117162    1677 addons.go:234] Setting addon storage-provisioner=true in "addons-514000"
	I0926 17:15:00.117196    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117202    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117208    1677 addons.go:69] Setting volcano=true in profile "addons-514000"
	I0926 17:15:00.117213    1677 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-514000"
	I0926 17:15:00.117217    1677 addons.go:69] Setting volumesnapshots=true in profile "addons-514000"
	I0926 17:15:00.117217    1677 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-514000"
	I0926 17:15:00.117208    1677 addons.go:69] Setting ingress-dns=true in profile "addons-514000"
	I0926 17:15:00.117267    1677 addons.go:234] Setting addon ingress-dns=true in "addons-514000"
	I0926 17:15:00.117277    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117204    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117169    1677 addons.go:234] Setting addon metrics-server=true in "addons-514000"
	I0926 17:15:00.117496    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117574    1677 retry.go:31] will retry after 1.211200755s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117213    1677 addons.go:234] Setting addon volcano=true in "addons-514000"
	I0926 17:15:00.117615    1677 retry.go:31] will retry after 1.132811786s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117635    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117152    1677 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-514000"
	I0926 17:15:00.117670    1677 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-514000"
	I0926 17:15:00.117693    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117765    1677 retry.go:31] will retry after 845.189815ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117172    1677 addons.go:69] Setting registry=true in profile "addons-514000"
	I0926 17:15:00.117867    1677 retry.go:31] will retry after 712.499922ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117886    1677 addons.go:234] Setting addon registry=true in "addons-514000"
	I0926 17:15:00.117905    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117204    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.117942    1677 retry.go:31] will retry after 1.07688315s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117950    1677 retry.go:31] will retry after 1.455401845s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117150    1677 addons.go:69] Setting cloud-spanner=true in profile "addons-514000"
	I0926 17:15:00.117971    1677 addons.go:234] Setting addon cloud-spanner=true in "addons-514000"
	I0926 17:15:00.117205    1677 addons.go:69] Setting gcp-auth=true in profile "addons-514000"
	I0926 17:15:00.118004    1677 mustload.go:65] Loading cluster: addons-514000
	I0926 17:15:00.117147    1677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-514000"
	I0926 17:15:00.118024    1677 retry.go:31] will retry after 734.615011ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117221    1677 addons.go:234] Setting addon volumesnapshots=true in "addons-514000"
	I0926 17:15:00.118092    1677 config.go:182] Loaded profile config "addons-514000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:15:00.118112    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.118148    1677 retry.go:31] will retry after 849.289314ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.118150    1677 retry.go:31] will retry after 1.291857837s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.117989    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.118214    1677 retry.go:31] will retry after 1.194108618s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.118366    1677 retry.go:31] will retry after 556.678843ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.118368    1677 retry.go:31] will retry after 990.18805ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.118407    1677 retry.go:31] will retry after 756.108645ms: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/monitor: connect: connection refused
	I0926 17:15:00.121289    1677 out.go:177] * Verifying Kubernetes components...
	I0926 17:15:00.129342    1677 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0926 17:15:00.129342    1677 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0926 17:15:00.133364    1677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:15:00.136332    1677 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0926 17:15:00.136338    1677 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0926 17:15:00.136346    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.139382    1677 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 17:15:00.139390    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0926 17:15:00.139396    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.239830    1677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
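The pipeline above rewrites the live coredns ConfigMap in place: one sed expression inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to the host's gateway IP, and the other enables the log plugin ahead of errors. Reconstructed from those expressions, the patched Corefile fragment looks roughly like this (the surrounding directives are the stock CoreDNS defaults, shown for context):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }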
	I0926 17:15:00.258157    1677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 17:15:00.324154    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0926 17:15:00.326050    1677 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0926 17:15:00.326058    1677 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0926 17:15:00.347849    1677 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0926 17:15:00.347859    1677 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0926 17:15:00.366212    1677 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0926 17:15:00.366224    1677 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0926 17:15:00.374988    1677 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0926 17:15:00.374996    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0926 17:15:00.378956    1677 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0926 17:15:00.380281    1677 node_ready.go:35] waiting up to 6m0s for node "addons-514000" to be "Ready" ...
	I0926 17:15:00.385752    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0926 17:15:00.386055    1677 node_ready.go:49] node "addons-514000" has status "Ready":"True"
	I0926 17:15:00.386074    1677 node_ready.go:38] duration metric: took 5.770709ms for node "addons-514000" to be "Ready" ...
	I0926 17:15:00.386081    1677 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:15:00.399324    1677 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:00.678613    1677 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-514000 service yakd-dashboard -n yakd-dashboard
	
	I0926 17:15:00.678620    1677 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:15:00.681622    1677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 17:15:00.681630    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 17:15:00.681639    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.728181    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 17:15:00.899477    1677 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0926 17:15:00.905580    1677 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0926 17:15:00.905590    1677 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0926 17:15:00.905598    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.911516    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0926 17:15:00.915535    1677 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0926 17:15:00.917048    1677 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-514000" context rescaled to 1 replicas
	I0926 17:15:00.919511    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0926 17:15:00.919519    1677 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0926 17:15:00.919528    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.923897    1677 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0926 17:15:00.930544    1677 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0926 17:15:00.935013    1677 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 17:15:00.935023    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0926 17:15:00.935035    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.966570    1677 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0926 17:15:00.969559    1677 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0926 17:15:00.969571    1677 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0926 17:15:00.969584    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.970730    1677 addons.go:234] Setting addon default-storageclass=true in "addons-514000"
	I0926 17:15:00.970747    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:00.971342    1677 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 17:15:00.971347    1677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 17:15:00.971352    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:00.979871    1677 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0926 17:15:00.979883    1677 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0926 17:15:01.028229    1677 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0926 17:15:01.028242    1677 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0926 17:15:01.036666    1677 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0926 17:15:01.036678    1677 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0926 17:15:01.037781    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0926 17:15:01.049467    1677 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0926 17:15:01.049479    1677 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0926 17:15:01.052881    1677 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0926 17:15:01.052892    1677 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0926 17:15:01.062927    1677 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0926 17:15:01.062939    1677 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0926 17:15:01.067071    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 17:15:01.079081    1677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0926 17:15:01.079091    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0926 17:15:01.087392    1677 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0926 17:15:01.087405    1677 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0926 17:15:01.107281    1677 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0926 17:15:01.107295    1677 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0926 17:15:01.113375    1677 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0926 17:15:01.117400    1677 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0926 17:15:01.117410    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0926 17:15:01.117433    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.121921    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0926 17:15:01.121932    1677 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0926 17:15:01.131575    1677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0926 17:15:01.131590    1677 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0926 17:15:01.157133    1677 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0926 17:15:01.157143    1677 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0926 17:15:01.167258    1677 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0926 17:15:01.167269    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0926 17:15:01.182286    1677 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:01.182295    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0926 17:15:01.189834    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0926 17:15:01.193049    1677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 17:15:01.193060    1677 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0926 17:15:01.198348    1677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:01.202334    1677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0926 17:15:01.212732    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0926 17:15:01.214393    1677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:01.218478    1677 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 17:15:01.218492    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0926 17:15:01.218505    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.225528    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0926 17:15:01.239404    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:01.253403    1677 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-514000"
	I0926 17:15:01.253426    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:01.257222    1677 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0926 17:15:01.261345    1677 out.go:177]   - Using image docker.io/busybox:stable
	I0926 17:15:01.265395    1677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 17:15:01.265403    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0926 17:15:01.265413    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.313718    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:01.334320    1677 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0926 17:15:01.340351    1677 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 17:15:01.340364    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0926 17:15:01.340408    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.414794    1677 out.go:177]   - Using image docker.io/registry:2.8.3
	I0926 17:15:01.417826    1677 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0926 17:15:01.421820    1677 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0926 17:15:01.421829    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0926 17:15:01.421840    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.470865    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0926 17:15:01.500960    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0926 17:15:01.577684    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0926 17:15:01.581799    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0926 17:15:01.585821    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0926 17:15:01.589729    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0926 17:15:01.597757    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0926 17:15:01.606750    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0926 17:15:01.616780    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0926 17:15:01.620593    1677 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0926 17:15:01.624709    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0926 17:15:01.624721    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0926 17:15:01.624731    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:01.636605    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0926 17:15:01.737176    1677 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0926 17:15:01.737191    1677 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0926 17:15:01.840830    1677 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0926 17:15:01.840843    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0926 17:15:02.009424    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0926 17:15:02.009437    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0926 17:15:02.031193    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0926 17:15:02.165564    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0926 17:15:02.165578    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0926 17:15:02.334126    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0926 17:15:02.334140    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0926 17:15:02.404701    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:02.407203    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0926 17:15:02.407213    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0926 17:15:02.437233    1677 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0926 17:15:02.437247    1677 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0926 17:15:02.486420    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0926 17:15:02.486431    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0926 17:15:02.629570    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0926 17:15:02.629583    1677 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0926 17:15:02.789434    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0926 17:15:02.789447    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0926 17:15:02.898139    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0926 17:15:02.898150    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0926 17:15:03.022460    1677 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 17:15:03.022475    1677 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0926 17:15:03.102762    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0926 17:15:04.419080    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:05.083852    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (4.046152167s)
	I0926 17:15:05.083900    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.016915292s)
	I0926 17:15:05.083948    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.894188916s)
	I0926 17:15:05.083975    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.871322625s)
	I0926 17:15:05.083996    1677 addons.go:475] Verifying addon metrics-server=true in "addons-514000"
	I0926 17:15:05.084009    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.858564791s)
	I0926 17:15:05.084044    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.844719458s)
	W0926 17:15:05.084054    1677 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0926 17:15:05.084067    1677 retry.go:31] will retry after 353.913056ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
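The failure above is a CRD establishment race, not a broken manifest: the `csi-hostpath-snapclass` VolumeSnapshotClass is applied in the same `kubectl apply` batch as the CRDs that define its kind, and the apiserver's REST mapper has not yet registered the new types, hence "no matches for kind ... ensure CRDs are installed first". minikube treats this as retryable (retry.go schedules a second attempt after ~354ms, and the retry at 17:15:05.44 below also adds `--force`). A minimal sketch of that retry-with-backoff shape, assuming a hypothetical `applyManifests` wrapper around kubectl:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyManifests shells out to kubectl apply -f for each file,
// mirroring the apply invocations in the log (paths illustrative).
func applyManifests(files ...string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %w\n%s", err, out)
	}
	return nil
}

// applyWithRetry re-runs the apply with a doubling backoff so freshly
// created CRDs have time to become established before dependent
// objects (like a VolumeSnapshotClass) need their REST mapping.
func applyWithRetry(attempts int, backoff time.Duration, files ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyManifests(files...); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return err
}

func main() {
	err := applyWithRetry(3, 350*time.Millisecond,
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	fmt.Println("apply result:", err)
}
```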
	I0926 17:15:05.084100    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.613311292s)
	I0926 17:15:05.084108    1677 addons.go:475] Verifying addon ingress=true in "addons-514000"
	I0926 17:15:05.084139    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.583250875s)
	I0926 17:15:05.084196    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.447651625s)
	I0926 17:15:05.084295    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.053162125s)
	I0926 17:15:05.084304    1677 addons.go:475] Verifying addon registry=true in "addons-514000"
	I0926 17:15:05.089604    1677 out.go:177] * Verifying ingress addon...
	I0926 17:15:05.092631    1677 out.go:177] * Verifying registry addon...
	I0926 17:15:05.105291    1677 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0926 17:15:05.112051    1677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0926 17:15:05.176975    1677 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0926 17:15:05.176985    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:05.177118    1677 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0926 17:15:05.177124    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
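From here on the log is dominated by kapi.go poll loops: list the pods matching a label selector, log the current phase, and keep waiting while any pod is still Pending. A rough client-go equivalent under those assumptions (helper names are illustrative, not minikube's actual kapi.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector polls until every pod matching selector in ns has
// left the Pending phase, or the timeout expires.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForSelector(context.Background(), cs,
		"ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)
	fmt.Println("wait result:", err)
}
```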
	W0926 17:15:05.198232    1677 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0926 17:15:05.440164    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0926 17:15:05.588307    1677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.48558225s)
	I0926 17:15:05.588327    1677 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-514000"
	I0926 17:15:05.593605    1677 out.go:177] * Verifying csi-hostpath-driver addon...
	I0926 17:15:05.602064    1677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0926 17:15:05.614888    1677 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0926 17:15:05.614898    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:05.625498    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:05.627545    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:06.106870    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:06.108818    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:06.209127    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:06.607232    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:06.607752    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:06.614610    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:06.904104    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:07.108736    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:07.108969    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:07.114552    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:07.606673    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:07.607415    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:07.614618    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:08.106458    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:08.107629    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:08.114609    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:08.606468    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:08.607652    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:08.614588    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:08.914434    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:09.109297    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:09.109638    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:09.115173    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:09.120107    1677 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0926 17:15:09.120130    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:09.161487    1677 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0926 17:15:09.168784    1677 addons.go:234] Setting addon gcp-auth=true in "addons-514000"
	I0926 17:15:09.168806    1677 host.go:66] Checking if "addons-514000" exists ...
	I0926 17:15:09.169575    1677 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0926 17:15:09.169582    1677 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/addons-514000/id_rsa Username:docker}
	I0926 17:15:09.202520    1677 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0926 17:15:09.206499    1677 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0926 17:15:09.211349    1677 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0926 17:15:09.211355    1677 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0926 17:15:09.217386    1677 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0926 17:15:09.217396    1677 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0926 17:15:09.223652    1677 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 17:15:09.223660    1677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0926 17:15:09.229813    1677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0926 17:15:09.444182    1677 addons.go:475] Verifying addon gcp-auth=true in "addons-514000"
	I0926 17:15:09.449958    1677 out.go:177] * Verifying gcp-auth addon...
	I0926 17:15:09.456385    1677 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0926 17:15:09.457518    1677 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 17:15:09.608128    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:09.608133    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:09.709478    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:10.111441    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:10.111484    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:10.113451    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:10.606416    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:10.607146    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:10.614484    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:11.106661    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:11.107160    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:11.114679    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:11.403640    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:11.606305    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:11.607318    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:11.614493    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:11.903187    1677 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-b9j5k" not found
	I0926 17:15:11.903202    1677 pod_ready.go:82] duration metric: took 11.504134875s for pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace to be "Ready" ...
	E0926 17:15:11.903211    1677 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-b9j5k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-b9j5k" not found
	I0926 17:15:11.903216    1677 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace to be "Ready" ...
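Worth noting in the lines just above: the coredns replica being tracked was deleted mid-wait (the deployment scales down from two replicas to one), so pod_ready treats the NotFound error as "skipping!" and moves on to the surviving replica rather than failing the whole wait. A library-style sketch of that distinction, using apimachinery's typed error check (the function name is illustrative):

```go
package kapi

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the named pod has condition Ready=True.
// A NotFound error is surfaced distinctly so the caller can skip to
// a replacement replica instead of treating it as a hard failure.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return false, fmt.Errorf("pod %q not found (skipping!)", name)
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```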
	I0926 17:15:12.108077    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:12.108954    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:12.115051    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:12.606389    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:12.607068    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:12.614546    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:13.106346    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:13.107169    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:13.114406    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:13.606206    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:13.607143    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:13.614660    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:13.907600    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:14.106375    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:14.107255    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:14.114483    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:14.606242    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:14.607054    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:14.614489    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:15.106203    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:15.107105    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:15.114510    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:15.622348    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:15.622459    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:15.626047    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:15.911271    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:16.110797    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:16.111020    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:16.115296    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:16.607731    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:16.610431    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:16.614159    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:17.106059    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:17.106617    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:17.114455    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:17.605790    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:17.606558    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:17.614849    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:18.105996    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:18.106716    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:18.114382    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:18.409671    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:18.608595    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:18.608782    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:18.613874    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:19.106184    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:19.106973    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:19.114207    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:19.606142    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:19.606995    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:19.614562    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:20.105996    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:20.106842    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:20.114165    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:20.604691    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:20.607390    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:20.614760    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:20.907786    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:21.105969    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:21.106697    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:21.114219    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:21.606446    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:21.606930    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:21.614471    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:22.106064    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:22.106892    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:22.114151    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:22.606174    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:22.606988    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:22.614281    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:23.106815    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:23.107708    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:23.114108    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:23.407347    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:23.606067    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:23.606677    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:23.614316    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:24.105864    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:24.106583    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:24.114273    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:24.606040    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:24.606834    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:24.614183    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:25.106061    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:25.107136    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:25.113986    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:25.407830    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:25.606201    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:25.606772    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:25.614269    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:26.105934    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:26.106915    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:26.114233    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:26.605974    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:26.606746    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:26.614105    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:27.110366    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:27.110501    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:27.209891    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:27.407825    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:27.608203    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:27.609808    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:27.614165    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:28.109754    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:28.110984    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:28.115471    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:28.607420    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:28.639421    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:28.639458    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:29.105948    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:29.106838    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:29.114196    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:29.407953    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:29.605748    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:29.606510    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:29.614344    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:30.106226    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:30.106660    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:30.114254    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:30.606065    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:30.606841    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:30.613947    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:31.107402    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:31.108960    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:31.114072    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:31.606611    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:31.607989    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:31.614198    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:31.907990    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:32.106799    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:32.108052    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:32.114225    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:32.607716    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:32.609618    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:32.613155    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:33.105684    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:33.106219    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:33.114003    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:33.605744    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:33.606455    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:33.614057    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:34.128509    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:34.128741    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:34.128768    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:34.407634    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:34.605838    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:34.606467    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:34.614049    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:35.105889    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:35.106770    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:35.113888    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:35.605170    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:35.606037    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:35.614414    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:36.104788    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:36.106457    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:36.113977    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:36.407750    1677 pod_ready.go:103] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"False"
	I0926 17:15:36.605925    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:36.606737    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:36.614113    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:37.105186    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:37.105995    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:37.113862    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:37.605273    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:37.606429    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:37.614018    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:37.907644    1677 pod_ready.go:93] pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:37.907653    1677 pod_ready.go:82] duration metric: took 26.005046834s for pod "coredns-7c65d6cfc9-cg8dv" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.907659    1677 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.909709    1677 pod_ready.go:93] pod "etcd-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:37.909715    1677 pod_ready.go:82] duration metric: took 2.052458ms for pod "etcd-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.909719    1677 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.911667    1677 pod_ready.go:93] pod "kube-apiserver-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:37.911673    1677 pod_ready.go:82] duration metric: took 1.950833ms for pod "kube-apiserver-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.911677    1677 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.913648    1677 pod_ready.go:93] pod "kube-controller-manager-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:37.913653    1677 pod_ready.go:82] duration metric: took 1.973834ms for pod "kube-controller-manager-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.913657    1677 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qc8gh" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.916746    1677 pod_ready.go:93] pod "kube-proxy-qc8gh" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:37.916753    1677 pod_ready.go:82] duration metric: took 3.092667ms for pod "kube-proxy-qc8gh" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:37.916756    1677 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:38.105642    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:38.106824    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:38.116110    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:38.307618    1677 pod_ready.go:93] pod "kube-scheduler-addons-514000" in "kube-system" namespace has status "Ready":"True"
	I0926 17:15:38.307628    1677 pod_ready.go:82] duration metric: took 390.877458ms for pod "kube-scheduler-addons-514000" in "kube-system" namespace to be "Ready" ...
	I0926 17:15:38.307631    1677 pod_ready.go:39] duration metric: took 37.92244s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0926 17:15:38.307642    1677 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:15:38.307716    1677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:15:38.314572    1677 api_server.go:72] duration metric: took 38.198403458s to wait for apiserver process to appear ...
	I0926 17:15:38.314581    1677 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:15:38.314590    1677 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0926 17:15:38.317231    1677 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0926 17:15:38.317775    1677 api_server.go:141] control plane version: v1.31.1
	I0926 17:15:38.317781    1677 api_server.go:131] duration metric: took 3.197208ms to wait for apiserver health ...
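The healthz probe above is just an HTTPS GET against the apiserver that expects a 200 with body "ok", followed by a control-plane version read. A minimal sketch (the URL is taken from the log; skipping TLS verification stands in for the real client-certificate setup, so this is illustrative only):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real check authenticates with the cluster's client certs;
		// skipping verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
```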
	I0926 17:15:38.317784    1677 system_pods.go:43] waiting for kube-system pods to appear ...
	I0926 17:15:38.512318    1677 system_pods.go:59] 17 kube-system pods found
	I0926 17:15:38.512337    1677 system_pods.go:61] "coredns-7c65d6cfc9-cg8dv" [d514c6ac-9cc5-43ce-a0cf-07f58e60b73f] Running
	I0926 17:15:38.512341    1677 system_pods.go:61] "csi-hostpath-attacher-0" [bab0ae24-80e8-4e97-8835-46b91c1b8fb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 17:15:38.512345    1677 system_pods.go:61] "csi-hostpath-resizer-0" [8ce9ff7d-605f-4300-a484-e914b5d01bbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 17:15:38.512348    1677 system_pods.go:61] "csi-hostpathplugin-4w5vc" [cd39f615-2a60-4928-a8b2-3658aba431dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 17:15:38.512351    1677 system_pods.go:61] "etcd-addons-514000" [10e61b64-5108-4b93-9e04-0bf28223a5ce] Running
	I0926 17:15:38.512357    1677 system_pods.go:61] "kube-apiserver-addons-514000" [f40a2098-75bb-4c02-8278-4fbf597446ff] Running
	I0926 17:15:38.512359    1677 system_pods.go:61] "kube-controller-manager-addons-514000" [1fea223d-3f1c-4acf-a34f-cb19f75f9511] Running
	I0926 17:15:38.512361    1677 system_pods.go:61] "kube-ingress-dns-minikube" [ba055693-f3e6-4a16-bf5d-e70f17e50a5c] Running
	I0926 17:15:38.512363    1677 system_pods.go:61] "kube-proxy-qc8gh" [139cde95-7d75-485f-9b47-e89668d84926] Running
	I0926 17:15:38.512365    1677 system_pods.go:61] "kube-scheduler-addons-514000" [d8f3bb19-ff1e-4a1f-a58a-0b01b1729029] Running
	I0926 17:15:38.512366    1677 system_pods.go:61] "metrics-server-84c5f94fbc-lp77z" [a135456e-4dc7-40b1-8fef-cd0581a32c60] Running
	I0926 17:15:38.512369    1677 system_pods.go:61] "nvidia-device-plugin-daemonset-ggs9h" [1bbbb61c-d5bc-49d8-9d69-003bf5aac935] Running
	I0926 17:15:38.512371    1677 system_pods.go:61] "registry-66c9cd494c-gbgnl" [3e581139-c091-4cb0-9d99-224fdfd570e6] Running
	I0926 17:15:38.512373    1677 system_pods.go:61] "registry-proxy-pj8zh" [e4e67464-6eb1-44d1-9d8c-808957ab325e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 17:15:38.512375    1677 system_pods.go:61] "snapshot-controller-56fcc65765-bvnnq" [1f914535-d7d2-44aa-a674-c9de2e356e77] Running
	I0926 17:15:38.512377    1677 system_pods.go:61] "snapshot-controller-56fcc65765-w9p5p" [9651cba5-88b7-4188-afe2-e3beeca01159] Running
	I0926 17:15:38.512379    1677 system_pods.go:61] "storage-provisioner" [64eec818-473b-4abd-a483-72bf5830f772] Running
	I0926 17:15:38.512382    1677 system_pods.go:74] duration metric: took 194.599958ms to wait for pod list to return data ...
	I0926 17:15:38.512387    1677 default_sa.go:34] waiting for default service account to be created ...
	I0926 17:15:38.605930    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:38.606680    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:38.614437    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:38.707783    1677 default_sa.go:45] found service account: "default"
	I0926 17:15:38.707794    1677 default_sa.go:55] duration metric: took 195.408459ms for default service account to be created ...
	I0926 17:15:38.707797    1677 system_pods.go:116] waiting for k8s-apps to be running ...
	I0926 17:15:38.911346    1677 system_pods.go:86] 17 kube-system pods found
	I0926 17:15:38.911357    1677 system_pods.go:89] "coredns-7c65d6cfc9-cg8dv" [d514c6ac-9cc5-43ce-a0cf-07f58e60b73f] Running
	I0926 17:15:38.911362    1677 system_pods.go:89] "csi-hostpath-attacher-0" [bab0ae24-80e8-4e97-8835-46b91c1b8fb4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0926 17:15:38.911365    1677 system_pods.go:89] "csi-hostpath-resizer-0" [8ce9ff7d-605f-4300-a484-e914b5d01bbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0926 17:15:38.911369    1677 system_pods.go:89] "csi-hostpathplugin-4w5vc" [cd39f615-2a60-4928-a8b2-3658aba431dd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0926 17:15:38.911372    1677 system_pods.go:89] "etcd-addons-514000" [10e61b64-5108-4b93-9e04-0bf28223a5ce] Running
	I0926 17:15:38.911374    1677 system_pods.go:89] "kube-apiserver-addons-514000" [f40a2098-75bb-4c02-8278-4fbf597446ff] Running
	I0926 17:15:38.911376    1677 system_pods.go:89] "kube-controller-manager-addons-514000" [1fea223d-3f1c-4acf-a34f-cb19f75f9511] Running
	I0926 17:15:38.911378    1677 system_pods.go:89] "kube-ingress-dns-minikube" [ba055693-f3e6-4a16-bf5d-e70f17e50a5c] Running
	I0926 17:15:38.911379    1677 system_pods.go:89] "kube-proxy-qc8gh" [139cde95-7d75-485f-9b47-e89668d84926] Running
	I0926 17:15:38.911381    1677 system_pods.go:89] "kube-scheduler-addons-514000" [d8f3bb19-ff1e-4a1f-a58a-0b01b1729029] Running
	I0926 17:15:38.911383    1677 system_pods.go:89] "metrics-server-84c5f94fbc-lp77z" [a135456e-4dc7-40b1-8fef-cd0581a32c60] Running
	I0926 17:15:38.911385    1677 system_pods.go:89] "nvidia-device-plugin-daemonset-ggs9h" [1bbbb61c-d5bc-49d8-9d69-003bf5aac935] Running
	I0926 17:15:38.911387    1677 system_pods.go:89] "registry-66c9cd494c-gbgnl" [3e581139-c091-4cb0-9d99-224fdfd570e6] Running
	I0926 17:15:38.911389    1677 system_pods.go:89] "registry-proxy-pj8zh" [e4e67464-6eb1-44d1-9d8c-808957ab325e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0926 17:15:38.911391    1677 system_pods.go:89] "snapshot-controller-56fcc65765-bvnnq" [1f914535-d7d2-44aa-a674-c9de2e356e77] Running
	I0926 17:15:38.911392    1677 system_pods.go:89] "snapshot-controller-56fcc65765-w9p5p" [9651cba5-88b7-4188-afe2-e3beeca01159] Running
	I0926 17:15:38.911394    1677 system_pods.go:89] "storage-provisioner" [64eec818-473b-4abd-a483-72bf5830f772] Running
	I0926 17:15:38.911397    1677 system_pods.go:126] duration metric: took 203.601833ms to wait for k8s-apps to be running ...
	I0926 17:15:38.911400    1677 system_svc.go:44] waiting for kubelet service to be running ....
	I0926 17:15:38.911460    1677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 17:15:38.918012    1677 system_svc.go:56] duration metric: took 6.609792ms WaitForService to wait for kubelet
	I0926 17:15:38.918021    1677 kubeadm.go:582] duration metric: took 38.801867791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:15:38.918030    1677 node_conditions.go:102] verifying NodePressure condition ...
	I0926 17:15:39.106178    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:39.107098    1677 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0926 17:15:39.107109    1677 node_conditions.go:123] node cpu capacity is 2
	I0926 17:15:39.107117    1677 node_conditions.go:105] duration metric: took 189.087334ms to run NodePressure ...
	I0926 17:15:39.107128    1677 start.go:241] waiting for startup goroutines ...
	I0926 17:15:39.107591    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:39.114862    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:39.609551    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:39.609847    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:39.615647    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:40.105337    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:40.106128    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:40.113839    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:40.605631    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:40.607816    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:40.613311    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:41.107267    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:41.108486    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:41.114991    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:41.605783    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:41.606688    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:41.613735    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0926 17:15:42.107202    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:42.108401    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:42.114236    1677 kapi.go:107] duration metric: took 37.003059417s to wait for kubernetes.io/minikube-addons=registry ...
	I0926 17:15:42.607403    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:42.608797    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:43.106544    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:43.107757    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:43.605506    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:43.606361    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:44.105471    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:44.106420    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:44.605981    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:44.606713    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:45.105262    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:45.106138    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:45.605550    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:45.606590    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:46.105462    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:46.106310    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:46.605639    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:46.606495    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:47.105518    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:47.106281    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:47.608453    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:47.611812    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:48.105360    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:48.106221    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:48.605568    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:48.606170    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:49.105056    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:49.106089    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:49.605393    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:49.606188    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:50.107857    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:50.107954    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:50.605352    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:50.606237    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:51.103761    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:51.105933    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:51.604330    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:51.607008    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:52.105049    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:52.105906    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:52.605393    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:52.606427    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:53.105259    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:53.105828    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:53.605870    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:53.607212    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:54.105158    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:54.105959    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:54.605095    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:54.605885    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:55.112947    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:55.113078    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:55.605305    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:55.606270    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:56.105391    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:56.106240    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:56.605252    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:56.606223    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:57.105109    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:57.106484    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:57.605255    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:57.605992    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:58.103450    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:58.105822    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:58.607028    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:58.607077    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:59.105792    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:59.106884    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:15:59.605258    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:15:59.606026    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:00.105768    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:00.106743    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:00.605113    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:00.605802    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:01.104964    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:01.105814    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:01.605323    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:01.606093    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:02.104831    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:02.105422    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:02.607199    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:02.607480    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:03.107543    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:03.107753    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:03.605048    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:03.605999    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:04.105255    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0926 17:16:04.106617    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:04.605212    1677 kapi.go:107] duration metric: took 59.004542s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0926 17:16:04.605864    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:05.108148    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:05.609297    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:06.110914    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:06.608097    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:07.107964    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:07.613602    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:08.109426    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:08.610204    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:09.108664    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:09.608208    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:10.106753    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:10.607028    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:11.107529    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:11.606653    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:12.106090    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:12.606338    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:13.106133    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:13.606281    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:14.105546    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:14.605663    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:15.106971    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:15.605359    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:16.105222    1677 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0926 17:16:16.605611    1677 kapi.go:107] duration metric: took 1m11.504601333s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0926 17:16:31.953608    1677 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0926 17:16:31.953616    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:32.455480    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:32.953281    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:33.453252    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:33.953545    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:34.453340    1677 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0926 17:16:34.953095    1677 kapi.go:107] duration metric: took 1m25.503590958s to wait for kubernetes.io/minikube-addons=gcp-auth ...
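	
	The kapi.go:96/kapi.go:107 lines above are minikube's addon wait loop: it polls the pods matching each label selector until they report Ready, then logs the total duration. A minimal sketch of that polling pattern with client-go follows; the helper name and the 500ms interval (the cadence visible in the timestamps above) are assumptions, not minikube's actual code.
	
	package example
	
	import (
		"context"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForPodsReady blocks until every pod in ns matching selector is
	// Running with a Ready=True condition. Hypothetical helper illustrating
	// the wait pattern logged above, not minikube's implementation.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or nothing scheduled yet: keep polling
				}
				for i := range pods.Items {
					p := &pods.Items[i]
					if p.Status.Phase != corev1.PodRunning || !isPodReady(p) {
						return false, nil // logged above as: current state: Pending
					}
				}
				return true, nil
			})
	}
	
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	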
	I0926 17:16:34.958075    1677 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-514000 cluster.
	I0926 17:16:34.961028    1677 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0926 17:16:34.963985    1677 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0926 17:16:34.969011    1677 out.go:177] * Enabled addons: nvidia-device-plugin, yakd, storage-provisioner, volcano, inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0926 17:16:34.972929    1677 addons.go:510] duration metric: took 1m34.862963625s for enable addons: enabled=[nvidia-device-plugin yakd storage-provisioner volcano inspektor-gadget metrics-server cloud-spanner ingress-dns default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
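	
	The `gcp-auth-skip-secret` hint above refers to a pod label, and it has to be present when the pod is created, since the addon injects credentials through a mutating admission webhook (hence the note that existing pods must be recreated). A hypothetical opt-out pod expressed with client-go types; the pod name, image, and label value are illustrative, as the log only specifies the key.
	
	package example
	
	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	// newOptOutPod builds a pod that the gcp-auth webhook should leave
	// alone, per the hint above: the opt-out is the label key on the pod's
	// metadata at creation time. Name, image, and label value are examples.
	func newOptOutPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-creds", // hypothetical
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
			},
		}
	}
	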
	I0926 17:16:34.972947    1677 start.go:246] waiting for cluster config update ...
	I0926 17:16:34.972960    1677 start.go:255] writing updated cluster config ...
	I0926 17:16:34.973363    1677 ssh_runner.go:195] Run: rm -f paused
	I0926 17:16:35.126506    1677 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0926 17:16:35.131010    1677 out.go:201] 
	W0926 17:16:35.134120    1677 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0926 17:16:35.138003    1677 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0926 17:16:35.144991    1677 out.go:177] * Done! kubectl is now configured to use "addons-514000" cluster and "default" namespace by default
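	
	The skew warning above is simple arithmetic on the minor version components: kubectl 1.29.2 against cluster 1.31.1 gives |31 - 29| = 2, and kubectl's support policy only covers one minor version of skew against the API server, hence the warning. A sketch of that check under assumed names (minikube's own code parses full semver):
	
	package example
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorSkew returns the absolute difference of the minor components of
	// two "major.minor.patch" versions, e.g. ("1.29.2", "1.31.1") -> 2,
	// matching the "(minor skew: 2)" figure logged above.
	func minorSkew(a, b string) (int, error) {
		ma, err := minor(a)
		if err != nil {
			return 0, err
		}
		mb, err := minor(b)
		if err != nil {
			return 0, err
		}
		if ma > mb {
			return ma - mb, nil
		}
		return mb - ma, nil
	}
	
	func minor(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	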
	
	
	==> Docker <==
	Sep 27 00:26:11 addons-514000 dockerd[1270]: time="2024-09-27T00:26:11.546323268Z" level=info msg="ignoring event" container=e357b21695bfef7eca5183724d7ca87f0da5a441dc3d752ef2c1f08cbf635a09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:11 addons-514000 dockerd[1277]: time="2024-09-27T00:26:11.559847777Z" level=info msg="shim disconnected" id=e357b21695bfef7eca5183724d7ca87f0da5a441dc3d752ef2c1f08cbf635a09 namespace=moby
	Sep 27 00:26:11 addons-514000 dockerd[1277]: time="2024-09-27T00:26:11.559894371Z" level=warning msg="cleaning up after shim disconnected" id=e357b21695bfef7eca5183724d7ca87f0da5a441dc3d752ef2c1f08cbf635a09 namespace=moby
	Sep 27 00:26:11 addons-514000 dockerd[1277]: time="2024-09-27T00:26:11.559898922Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.522298036Z" level=info msg="shim disconnected" id=d3690ea282bf0b1de8f54e6d17c9d46ca1b390e69d8a232c7fe36f9081ae652d namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.522337042Z" level=warning msg="cleaning up after shim disconnected" id=d3690ea282bf0b1de8f54e6d17c9d46ca1b390e69d8a232c7fe36f9081ae652d namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.522341464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1270]: time="2024-09-27T00:26:26.522693772Z" level=info msg="ignoring event" container=d3690ea282bf0b1de8f54e6d17c9d46ca1b390e69d8a232c7fe36f9081ae652d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:26 addons-514000 dockerd[1270]: time="2024-09-27T00:26:26.670347244Z" level=info msg="ignoring event" container=a191a857a257fa9ffb005d87a0a5b02e1ae52415968f7d00082be7a18224084f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.670708354Z" level=info msg="shim disconnected" id=a191a857a257fa9ffb005d87a0a5b02e1ae52415968f7d00082be7a18224084f namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.670791081Z" level=warning msg="cleaning up after shim disconnected" id=a191a857a257fa9ffb005d87a0a5b02e1ae52415968f7d00082be7a18224084f namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.670797672Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1270]: time="2024-09-27T00:26:26.708255230Z" level=info msg="ignoring event" container=2c137b39137324554afae6a87936cdaf95bb946a4a7f9e071a39b1ec9b8c88c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.709016708Z" level=info msg="shim disconnected" id=2c137b39137324554afae6a87936cdaf95bb946a4a7f9e071a39b1ec9b8c88c8 namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.709051084Z" level=warning msg="cleaning up after shim disconnected" id=2c137b39137324554afae6a87936cdaf95bb946a4a7f9e071a39b1ec9b8c88c8 namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.709055506Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.765521144Z" level=info msg="shim disconnected" id=2aaf3cc3634128a5d2884197cfd557c9e82c8665cd54e4b5c6b4e4fc6be32aea namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.765551598Z" level=warning msg="cleaning up after shim disconnected" id=2aaf3cc3634128a5d2884197cfd557c9e82c8665cd54e4b5c6b4e4fc6be32aea namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.765555770Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1270]: time="2024-09-27T00:26:26.766012081Z" level=info msg="ignoring event" container=2aaf3cc3634128a5d2884197cfd557c9e82c8665cd54e4b5c6b4e4fc6be32aea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:26 addons-514000 dockerd[1270]: time="2024-09-27T00:26:26.810695524Z" level=info msg="ignoring event" container=669f090b9921fb803427b94bf66581b58b348ab3345d975a1cca5235181725c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.811396970Z" level=info msg="shim disconnected" id=669f090b9921fb803427b94bf66581b58b348ab3345d975a1cca5235181725c6 namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.811465929Z" level=warning msg="cleaning up after shim disconnected" id=669f090b9921fb803427b94bf66581b58b348ab3345d975a1cca5235181725c6 namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.811472771Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:26:26 addons-514000 dockerd[1277]: time="2024-09-27T00:26:26.823110331Z" level=warning msg="cleanup warnings time=\"2024-09-27T00:26:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b91f37687002a       fc9db2894f4e4                                                                                                                                27 seconds ago      Exited              helper-pod                               0                   14995e9be02eb       helper-pod-delete-pvc-5c58b83f-e535-4b6e-8a9a-9b3242b1d8cf
	a7725fc5d1ca0       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              30 seconds ago      Exited              busybox                                  0                   e658b73146ed5       test-local-path
	e2265587c92e0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   415e2e6600cc5       gadget-9rbgm
	3385dd2d5b86c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   6360f876d85fb       gcp-auth-89d5ffd79-jw7m8
	576548efd22a9       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             10 minutes ago      Running             controller                               0                   72a9fb4be1af8       ingress-nginx-controller-bc57996ff-sn5zv
	f7fc81b8d9de1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   967491d9461b9       csi-hostpathplugin-4w5vc
	50bd1e9534757       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   967491d9461b9       csi-hostpathplugin-4w5vc
	68dcc8f686279       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   967491d9461b9       csi-hostpathplugin-4w5vc
	37e6f1e9f2c71       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   967491d9461b9       csi-hostpathplugin-4w5vc
	21afc57c0eaaa       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   967491d9461b9       csi-hostpathplugin-4w5vc
	622089410cfd0       420193b27261a                                                                                                                                10 minutes ago      Exited              patch                                    1                   1c77d5e019c59       ingress-nginx-admission-patch-ltbr9
	d5282d48877c4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   10 minutes ago      Exited              create                                   0                   7899154b8f2ce       ingress-nginx-admission-create-7t4q6
	8f07015a54d91       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   7f7d91772fa34       csi-hostpath-resizer-0
	e4810b0921d95       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   7d68b32206bc7       csi-hostpath-attacher-0
	8c1220e1848c6       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   967491d9461b9       csi-hostpathplugin-4w5vc
	2c137b3913732       gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982                              10 minutes ago      Exited              registry-proxy                           0                   669f090b9921f       registry-proxy-pj8zh
	a191a857a257f       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             10 minutes ago      Exited              registry                                 0                   2aaf3cc363412       registry-66c9cd494c-gbgnl
	71be8bc583a31       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   d5c340c9532ec       snapshot-controller-56fcc65765-bvnnq
	c318f6bd0b154       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   5f088174112d9       snapshot-controller-56fcc65765-w9p5p
	296c609e9d705       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   dc266e510611e       local-path-provisioner-86d989889c-jvxtd
	68fd3127ed3e1       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             11 minutes ago      Running             minikube-ingress-dns                     0                   e987784f5f4b6       kube-ingress-dns-minikube
	9a5102a33ad37       ba04bb24b9575                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   b2113ec3ccb33       storage-provisioner
	392c25f88eabf       24a140c548c07                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   f9bf1b3e040c1       kube-proxy-qc8gh
	b11a4349a0cc4       2f6c962e7b831                                                                                                                                11 minutes ago      Running             coredns                                  0                   a01bfecec5fd8       coredns-7c65d6cfc9-cg8dv
	86f502248724d       d3f53a98c0a9d                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   ec69330eee2d9       kube-apiserver-addons-514000
	be32008d4cd2f       7f8aa378bb47d                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   79e964bfb2204       kube-scheduler-addons-514000
	4e656c99ee28f       279f381cb3736                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   1afdd966ff468       kube-controller-manager-addons-514000
	3377e4737a0aa       27e3830e14027                                                                                                                                11 minutes ago      Running             etcd                                     0                   6cb5eee774543       etcd-addons-514000
	
	
	==> controller_ingress [576548efd22a] <==
	W0927 00:16:16.273615       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0927 00:16:16.273698       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0927 00:16:16.276629       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0927 00:16:16.367488       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0927 00:16:16.382190       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0927 00:16:16.388973       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0927 00:16:16.398504       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5bcf1439-a687-4c7c-9178-b8102166838a", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0927 00:16:16.402272       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ab593e79-119a-4ee5-bce7-e2d3c61a5013", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0927 00:16:16.402283       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"ddf30168-ec35-4492-840c-2d7afb36d20f", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0927 00:16:17.592320       7 nginx.go:317] "Starting NGINX process"
	I0927 00:16:17.592615       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0927 00:16:17.592683       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0927 00:16:17.594452       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0927 00:16:17.613984       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0927 00:16:17.614080       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-sn5zv"
	I0927 00:16:17.621745       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-sn5zv" node="addons-514000"
	I0927 00:16:17.637933       7 controller.go:213] "Backend successfully reloaded"
	I0927 00:16:17.638021       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0927 00:16:17.638044       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-sn5zv", UID:"2388f5c9-707a-4e3f-b108-daffe1f6f235", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [b11a4349a0cc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.14:43758 - 59511 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000106459s
	[INFO] 10.244.0.14:43758 - 55864 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000128959s
	[INFO] 10.244.0.14:43758 - 55033 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000030792s
	[INFO] 10.244.0.14:43758 - 32972 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00009825s
	[INFO] 10.244.0.14:43758 - 56727 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000038334s
	[INFO] 10.244.0.14:43758 - 57471 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000024125s
	[INFO] 10.244.0.14:43758 - 42214 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000031083s
	[INFO] 10.244.0.14:43758 - 44043 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000024209s
	[INFO] 10.244.0.14:44833 - 61887 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00002925s
	[INFO] 10.244.0.14:44833 - 61506 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000016375s
	[INFO] 10.244.0.14:44964 - 9276 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000013584s
	[INFO] 10.244.0.14:44964 - 9192 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000025916s
	[INFO] 10.244.0.14:60002 - 61200 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031333s
	[INFO] 10.244.0.14:60002 - 61117 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00001625s
	[INFO] 10.244.0.14:59559 - 61179 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000015125s
	[INFO] 10.244.0.14:59559 - 60881 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000015417s
	[INFO] 10.244.0.25:59923 - 38166 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00012054s
	[INFO] 10.244.0.25:56658 - 17903 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000033072s
	[INFO] 10.244.0.25:45757 - 27279 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000046484s
	[INFO] 10.244.0.25:51013 - 3272 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002345s
	[INFO] 10.244.0.25:59661 - 44092 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000030155s
	[INFO] 10.244.0.25:54356 - 23531 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000023492s
	[INFO] 10.244.0.25:54781 - 27821 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002328084s
	[INFO] 10.244.0.25:60917 - 60155 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00243392s
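	
	The NXDOMAIN/NOERROR pairs above are normal in-cluster search-path expansion, not a failure: with the usual pod resolv.conf (ndots:5 and a search list of <ns>.svc.cluster.local, svc.cluster.local, cluster.local), a name with fewer than five dots is tried with each search suffix appended first, and only the final as-is query for registry.kube-system.svc.cluster.local succeeds. A sketch of that candidate ordering, assuming glibc-style resolv.conf semantics rather than CoreDNS code:
	
	package example
	
	import "strings"
	
	// searchCandidates lists the query names a resolver tries for a name,
	// reproducing the NXDOMAIN-then-NOERROR pattern in the coredns log
	// above (ndots/search semantics sketch, not CoreDNS code).
	func searchCandidates(name string, ndots int, search []string) []string {
		if strings.HasSuffix(name, ".") {
			return []string{name} // absolute name: queried as-is
		}
		var out []string
		if strings.Count(name, ".") < ndots {
			// few dots: search suffixes first (the NXDOMAIN lines), bare name last
			for _, s := range search {
				out = append(out, name+"."+s)
			}
			return append(out, name)
		}
		// enough dots: bare name first, then the search list
		out = append(out, name)
		for _, s := range search {
			out = append(out, name+"."+s)
		}
		return out
	}
	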
	
	
	==> describe nodes <==
	Name:               addons-514000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-514000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-514000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_14_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-514000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-514000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:14:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-514000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:26:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:25:59 +0000   Fri, 27 Sep 2024 00:14:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:25:59 +0000   Fri, 27 Sep 2024 00:14:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:25:59 +0000   Fri, 27 Sep 2024 00:14:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:25:59 +0000   Fri, 27 Sep 2024 00:14:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-514000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8196a34ab0148bf8fa81b6dd56a0754
	  System UUID:                a8196a34ab0148bf8fa81b6dd56a0754
	  Boot ID:                    9cb6e5e9-4625-4c20-ac6b-fc117440db59
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-9rbgm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-jw7m8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-sn5zv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-cg8dv                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-4w5vc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-514000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-addons-514000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-514000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qc8gh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-514000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-bvnnq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-w9p5p        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-jvxtd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node addons-514000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node addons-514000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node addons-514000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                11m   kubelet          Node addons-514000 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node addons-514000 event: Registered Node addons-514000 in Controller
	
	
	==> dmesg <==
	[  +5.371188] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.063510] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.617226] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.146222] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.397446] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.105344] kauditd_printk_skb: 51 callbacks suppressed
	[Sep27 00:16] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.894645] kauditd_printk_skb: 52 callbacks suppressed
	[ +14.079567] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.073536] kauditd_printk_skb: 6 callbacks suppressed
	[ +18.857822] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:17] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.926166] kauditd_printk_skb: 20 callbacks suppressed
	[ +10.292587] kauditd_printk_skb: 2 callbacks suppressed
	[  +9.212017] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:21] kauditd_printk_skb: 2 callbacks suppressed
	[Sep27 00:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.620188] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.626369] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.300982] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.311200] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.475613] kauditd_printk_skb: 23 callbacks suppressed
	[Sep27 00:26] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.383417] kauditd_printk_skb: 9 callbacks suppressed
	[ +16.007827] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [3377e4737a0a] <==
	{"level":"info","ts":"2024-09-27T00:14:51.798065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-27T00:14:51.805942Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-514000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T00:14:51.806063Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:14:51.806259Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:14:51.809865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:14:51.809977Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T00:14:51.810017Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T00:14:51.810381Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:14:51.814322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:14:51.826339Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:14:51.846949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-27T00:14:51.847027Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:14:51.847100Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:14:51.847128Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2024-09-27T00:15:08.176043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.746709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-b9j5k\" ","response":"range_response_count:1 size:5105"}
	{"level":"info","ts":"2024-09-27T00:15:08.176087Z","caller":"traceutil/trace.go:171","msg":"trace[1628063175] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-b9j5k; range_end:; response_count:1; response_revision:894; }","duration":"141.796154ms","start":"2024-09-27T00:15:08.034284Z","end":"2024-09-27T00:15:08.176080Z","steps":["trace[1628063175] 'range keys from in-memory index tree'  (duration: 141.685796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:15:08.176415Z","caller":"traceutil/trace.go:171","msg":"trace[1645151612] transaction","detail":"{read_only:false; response_revision:895; number_of_response:1; }","duration":"127.296172ms","start":"2024-09-27T00:15:08.049115Z","end":"2024-09-27T00:15:08.176412Z","steps":["trace[1645151612] 'process raft request'  (duration: 127.163783ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:15.681235Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.780708ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-cg8dv\" ","response":"range_response_count:1 size:5093"}
	{"level":"info","ts":"2024-09-27T00:15:15.681312Z","caller":"traceutil/trace.go:171","msg":"trace[1120047615] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-cg8dv; range_end:; response_count:1; response_revision:965; }","duration":"212.860713ms","start":"2024-09-27T00:15:15.468444Z","end":"2024-09-27T00:15:15.681305Z","steps":["trace[1120047615] 'range keys from in-memory index tree'  (duration: 212.729747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:15:15.681431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.998522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:15:15.681475Z","caller":"traceutil/trace.go:171","msg":"trace[110749854] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:965; }","duration":"161.010082ms","start":"2024-09-27T00:15:15.520427Z","end":"2024-09-27T00:15:15.681437Z","steps":["trace[110749854] 'range keys from in-memory index tree'  (duration: 160.969189ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:16:56.717980Z","caller":"traceutil/trace.go:171","msg":"trace[902421867] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"253.206788ms","start":"2024-09-27T00:16:56.464763Z","end":"2024-09-27T00:16:56.717970Z","steps":["trace[902421867] 'process raft request'  (duration: 253.055257ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:24:52.196129Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1860}
	{"level":"info","ts":"2024-09-27T00:24:52.295282Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1860,"took":"95.458682ms","hash":4207817131,"current-db-size-bytes":9261056,"current-db-size":"9.3 MB","current-db-size-in-use-bytes":4947968,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-27T00:24:52.295724Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4207817131,"revision":1860,"compact-revision":-1}
	
	
	==> gcp-auth [3385dd2d5b86] <==
	2024/09/27 00:16:34 GCP Auth Webhook started!
	2024/09/27 00:16:51 Ready to marshal response ...
	2024/09/27 00:16:51 Ready to write response ...
	2024/09/27 00:16:52 Ready to marshal response ...
	2024/09/27 00:16:52 Ready to write response ...
	2024/09/27 00:17:15 Ready to marshal response ...
	2024/09/27 00:17:15 Ready to write response ...
	2024/09/27 00:17:15 Ready to marshal response ...
	2024/09/27 00:17:15 Ready to write response ...
	2024/09/27 00:17:15 Ready to marshal response ...
	2024/09/27 00:17:15 Ready to write response ...
	2024/09/27 00:25:16 Ready to marshal response ...
	2024/09/27 00:25:16 Ready to write response ...
	2024/09/27 00:25:16 Ready to marshal response ...
	2024/09/27 00:25:16 Ready to write response ...
	2024/09/27 00:25:16 Ready to marshal response ...
	2024/09/27 00:25:16 Ready to write response ...
	2024/09/27 00:25:26 Ready to marshal response ...
	2024/09/27 00:25:26 Ready to write response ...
	2024/09/27 00:25:50 Ready to marshal response ...
	2024/09/27 00:25:50 Ready to write response ...
	2024/09/27 00:25:50 Ready to marshal response ...
	2024/09/27 00:25:50 Ready to write response ...
	2024/09/27 00:25:59 Ready to marshal response ...
	2024/09/27 00:25:59 Ready to write response ...
	
	
	==> kernel <==
	 00:26:27 up 11 min,  0 users,  load average: 0.50, 0.60, 0.45
	Linux addons-514000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [86f502248724] <==
	E0927 00:16:12.575449       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.231.64:443: connect: connection refused" logger="UnhandledError"
	W0927 00:16:12.589622       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.231.64:443: connect: connection refused
	E0927 00:16:12.589742       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.231.64:443: connect: connection refused" logger="UnhandledError"
	W0927 00:16:31.526442       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.231.64:443: connect: connection refused
	E0927 00:16:31.526479       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.231.64:443: connect: connection refused" logger="UnhandledError"
	I0927 00:16:51.362403       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0927 00:16:51.373116       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0927 00:17:04.751874       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:04.757075       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:04.922280       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:04.937622       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:04.956121       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:05.157735       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:05.181952       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:05.184463       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:05.302180       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0927 00:17:05.855639       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0927 00:17:06.182786       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0927 00:17:06.184556       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0927 00:17:06.184661       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 00:17:06.230707       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0927 00:17:06.302554       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W0927 00:17:06.309953       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0927 00:25:16.405382       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.227.237"}
	I0927 00:26:20.776548       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [4e656c99ee28] <==
	I0927 00:25:28.644745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-514000"
	I0927 00:25:29.675926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="1.669µs"
	W0927 00:25:32.534062       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:32.534179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:25:39.756831       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0927 00:25:40.001427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.046µs"
	W0927 00:25:47.044142       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:47.044320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:25:50.084771       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0927 00:25:51.825309       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:51.825355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:25:54.915134       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:54.915164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:25:55.624375       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:55.624680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:25:59.352700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-514000"
	W0927 00:26:01.369533       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:01.369627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:26:05.165444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="1.712µs"
	W0927 00:26:09.758896       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:09.759051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:26:10.395733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="1.42µs"
	W0927 00:26:17.488887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:17.489016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:26:26.642768       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.961µs"
	
	
	==> kube-proxy [392c25f88eab] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:15:01.437542       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:15:01.461489       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0927 00:15:01.461523       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:01.491181       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:15:01.491205       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:15:01.491232       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:01.491940       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:01.492059       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:01.492066       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:01.494171       1 config.go:199] "Starting service config controller"
	I0927 00:15:01.494185       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:01.494193       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:01.494196       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:01.494368       1 config.go:328] "Starting node config controller"
	I0927 00:15:01.494377       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:01.597657       1 shared_informer.go:320] Caches are synced for node config
	I0927 00:15:01.597706       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:15:01.597727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [be32008d4cd2] <==
	W0927 00:14:52.708677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:14:52.708686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:14:52.708752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708791       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:14:52.708798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:14:52.708850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:14:52.708904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:14:52.708936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:52.708976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:14:52.708992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:53.597275       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:14:53.597341       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:14:53.619210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:14:53.619241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:53.628535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:14:53.628574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:53.653982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:14:53.654034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:14:53.769721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:14:53.769821       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 00:14:55.605732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:26:11 addons-514000 kubelet[2050]: I0927 00:26:11.780636    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a135456e-4dc7-40b1-8fef-cd0581a32c60-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a135456e-4dc7-40b1-8fef-cd0581a32c60" (UID: "a135456e-4dc7-40b1-8fef-cd0581a32c60"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 27 00:26:11 addons-514000 kubelet[2050]: I0927 00:26:11.786207    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a135456e-4dc7-40b1-8fef-cd0581a32c60-kube-api-access-66n5j" (OuterVolumeSpecName: "kube-api-access-66n5j") pod "a135456e-4dc7-40b1-8fef-cd0581a32c60" (UID: "a135456e-4dc7-40b1-8fef-cd0581a32c60"). InnerVolumeSpecName "kube-api-access-66n5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:26:11 addons-514000 kubelet[2050]: I0927 00:26:11.882116    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-66n5j\" (UniqueName: \"kubernetes.io/projected/a135456e-4dc7-40b1-8fef-cd0581a32c60-kube-api-access-66n5j\") on node \"addons-514000\" DevicePath \"\""
	Sep 27 00:26:11 addons-514000 kubelet[2050]: I0927 00:26:11.882200    2050 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a135456e-4dc7-40b1-8fef-cd0581a32c60-tmp-dir\") on node \"addons-514000\" DevicePath \"\""
	Sep 27 00:26:12 addons-514000 kubelet[2050]: I0927 00:26:12.229902    2050 scope.go:117] "RemoveContainer" containerID="d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378"
	Sep 27 00:26:12 addons-514000 kubelet[2050]: I0927 00:26:12.251377    2050 scope.go:117] "RemoveContainer" containerID="d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378"
	Sep 27 00:26:12 addons-514000 kubelet[2050]: E0927 00:26:12.252077    2050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378" containerID="d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378"
	Sep 27 00:26:12 addons-514000 kubelet[2050]: I0927 00:26:12.252106    2050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378"} err="failed to get container status \"d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378\": rpc error: code = Unknown desc = Error response from daemon: No such container: d388d28e9e9b53bbd2050b3b0471bf7b7546426a6d8e9cb78ec308ecce47d378"
	Sep 27 00:26:12 addons-514000 kubelet[2050]: I0927 00:26:12.562166    2050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a135456e-4dc7-40b1-8fef-cd0581a32c60" path="/var/lib/kubelet/pods/a135456e-4dc7-40b1-8fef-cd0581a32c60/volumes"
	Sep 27 00:26:14 addons-514000 kubelet[2050]: I0927 00:26:14.544120    2050 scope.go:117] "RemoveContainer" containerID="e2265587c92e0dc863657607a868b7da6c8a7518e8f48257e3f704ba3dac9275"
	Sep 27 00:26:14 addons-514000 kubelet[2050]: E0927 00:26:14.544242    2050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-9rbgm_gadget(39b26726-c988-485e-af26-48900aa73ca5)\"" pod="gadget/gadget-9rbgm" podUID="39b26726-c988-485e-af26-48900aa73ca5"
	Sep 27 00:26:18 addons-514000 kubelet[2050]: E0927 00:26:18.548050    2050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="d7f645c1-2d20-42e7-81c3-8b3c81a7309d"
	Sep 27 00:26:22 addons-514000 kubelet[2050]: E0927 00:26:22.547222    2050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9"
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.646200    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-gcp-creds\") pod \"6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9\" (UID: \"6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9\") "
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.646230    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njrbl\" (UniqueName: \"kubernetes.io/projected/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-kube-api-access-njrbl\") pod \"6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9\" (UID: \"6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9\") "
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.646393    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9" (UID: "6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.649665    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-kube-api-access-njrbl" (OuterVolumeSpecName: "kube-api-access-njrbl") pod "6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9" (UID: "6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9"). InnerVolumeSpecName "kube-api-access-njrbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.747162    2050 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-gcp-creds\") on node \"addons-514000\" DevicePath \"\""
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.747177    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-njrbl\" (UniqueName: \"kubernetes.io/projected/6b8f5e4f-f74f-48ff-8500-3ff4ec9edab9-kube-api-access-njrbl\") on node \"addons-514000\" DevicePath \"\""
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.948478    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgxpf\" (UniqueName: \"kubernetes.io/projected/3e581139-c091-4cb0-9d99-224fdfd570e6-kube-api-access-wgxpf\") pod \"3e581139-c091-4cb0-9d99-224fdfd570e6\" (UID: \"3e581139-c091-4cb0-9d99-224fdfd570e6\") "
	Sep 27 00:26:26 addons-514000 kubelet[2050]: I0927 00:26:26.949134    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e581139-c091-4cb0-9d99-224fdfd570e6-kube-api-access-wgxpf" (OuterVolumeSpecName: "kube-api-access-wgxpf") pod "3e581139-c091-4cb0-9d99-224fdfd570e6" (UID: "3e581139-c091-4cb0-9d99-224fdfd570e6"). InnerVolumeSpecName "kube-api-access-wgxpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:26:27 addons-514000 kubelet[2050]: I0927 00:26:27.049373    2050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9qx\" (UniqueName: \"kubernetes.io/projected/e4e67464-6eb1-44d1-9d8c-808957ab325e-kube-api-access-zq9qx\") pod \"e4e67464-6eb1-44d1-9d8c-808957ab325e\" (UID: \"e4e67464-6eb1-44d1-9d8c-808957ab325e\") "
	Sep 27 00:26:27 addons-514000 kubelet[2050]: I0927 00:26:27.049414    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wgxpf\" (UniqueName: \"kubernetes.io/projected/3e581139-c091-4cb0-9d99-224fdfd570e6-kube-api-access-wgxpf\") on node \"addons-514000\" DevicePath \"\""
	Sep 27 00:26:27 addons-514000 kubelet[2050]: I0927 00:26:27.050046    2050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e67464-6eb1-44d1-9d8c-808957ab325e-kube-api-access-zq9qx" (OuterVolumeSpecName: "kube-api-access-zq9qx") pod "e4e67464-6eb1-44d1-9d8c-808957ab325e" (UID: "e4e67464-6eb1-44d1-9d8c-808957ab325e"). InnerVolumeSpecName "kube-api-access-zq9qx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:26:27 addons-514000 kubelet[2050]: I0927 00:26:27.150031    2050 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zq9qx\" (UniqueName: \"kubernetes.io/projected/e4e67464-6eb1-44d1-9d8c-808957ab325e-kube-api-access-zq9qx\") on node \"addons-514000\" DevicePath \"\""
	
	
	==> storage-provisioner [9a5102a33ad3] <==
	I0927 00:15:02.478264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:02.510385       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:02.510419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:02.521744       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:02.521903       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-514000_9471ffce-f584-475c-a63a-872f18661969!
	I0927 00:15:02.522387       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cee0dfca-3e7d-4be2-a374-e5a9dbd98926", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-514000_9471ffce-f584-475c-a63a-872f18661969 became leader
	I0927 00:15:02.622454       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-514000_9471ffce-f584-475c-a63a-872f18661969!
	

-- /stdout --
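
The dump above is minikube's standard post-mortem bundle (node describe, dmesg, and per-component logs). Assuming the addons-514000 profile were still up, an equivalent bundle could be regenerated with the same binary this run used:

    out/minikube-darwin-arm64 -p addons-514000 logs
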
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-514000 -n addons-514000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-514000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-test ingress-nginx-admission-create-7t4q6 ingress-nginx-admission-patch-ltbr9 registry-66c9cd494c-gbgnl registry-proxy-pj8zh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-514000 describe pod busybox registry-test ingress-nginx-admission-create-7t4q6 ingress-nginx-admission-patch-ltbr9 registry-66c9cd494c-gbgnl registry-proxy-pj8zh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-514000 describe pod busybox registry-test ingress-nginx-admission-create-7t4q6 ingress-nginx-admission-patch-ltbr9 registry-66c9cd494c-gbgnl registry-proxy-pj8zh: exit status 1 (62.814125ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-514000/192.168.105.2
	Start Time:       Thu, 26 Sep 2024 17:17:15 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q6572 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q6572:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m12s                   default-scheduler  Successfully assigned default/busybox to addons-514000
	  Warning  Failed     7m53s (x6 over 9m11s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m42s (x4 over 9m12s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m12s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m12s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m11s (x21 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-test" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-7t4q6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ltbr9" not found
	Error from server (NotFound): pods "registry-66c9cd494c-gbgnl" not found
	Error from server (NotFound): pods "registry-proxy-pj8zh" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-514000 describe pod busybox registry-test ingress-nginx-admission-create-7t4q6 ingress-nginx-admission-patch-ltbr9 registry-66c9cd494c-gbgnl registry-proxy-pj8zh: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.30s)
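
Root cause, as captured above: the busybox and registry-test helper pods never left ImagePullBackOff, with every pull of gcr.io/k8s-minikube/busybox rejected as "unauthorized: authentication failed", so the failure sits in the image-pull/credential path rather than in the registry addon itself. A minimal triage sketch against a still-running profile (profile and pod names are taken from this run; adjust as needed):

    # List pods stuck outside Running, as the harness does above.
    kubectl --context addons-514000 get pods -A --field-selector=status.phase!=Running
    # Inspect the pull errors recorded on a stuck pod.
    kubectl --context addons-514000 describe pod busybox
    # Retry the pull by hand on the node to separate registry auth from kubelet issues.
    out/minikube-darwin-arm64 -p addons-514000 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
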

TestCertOptions (10.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.951123041s)

-- stdout --
	* [cert-options-759000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-759000" primary control-plane node in "cert-options-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-759000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-759000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.665917ms)

-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-759000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-759000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-759000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-759000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.222084ms)

-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-759000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-26 17:57:01.095987 -0700 PDT m=+2590.774298501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-759000 -n cert-options-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-759000 -n cert-options-759000: exit status 7 (29.860209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-759000
--- FAIL: TestCertOptions (10.22s)
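
Unlike the registry failure above, this test never reached Kubernetes at all: both VM creation attempts died with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet network helper was unreachable on the host, so the qemu2 driver could not bring up networking. The same error recurs in TestCertExpiration below and in most of the other qemu2 start failures in this report. A host-side check sketch (install path assumed from a typical Homebrew setup; adjust to the local install):

    # Does the socket exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i vmnet
    # If the helper is not running, it can be started by hand, e.g.:
    sudo /opt/homebrew/opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
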

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.983503s)

-- stdout --
	* [cert-expiration-671000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-671000" primary control-plane node in "cert-expiration-671000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.175193s)

-- stdout --
	* [cert-expiration-671000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-671000" primary control-plane node in "cert-expiration-671000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-671000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-671000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-671000" primary control-plane node in "cert-expiration-671000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-671000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-671000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-26 18:00:01.079102 -0700 PDT m=+2770.762440542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-671000 -n cert-expiration-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-671000 -n cert-expiration-671000: exit status 7 (32.334542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-671000
--- FAIL: TestCertExpiration (195.28s)
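
Note on the failure mode in this group: every start attempt above dies at the same point. socket_vmnet_client reports "Connection refused" on /var/run/socket_vmnet, which means no socket_vmnet daemon was listening on the host, so QEMU never received a vmnet file descriptor and no VM could boot; the cert-expiration logic itself was never exercised. A minimal triage sketch for the CI host follows; the Homebrew service name and the direct-launch flags are assumptions inferred from the install paths visible in these logs, not commands taken from this run:

	# Is anything listening on the socket the client dials?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet is managed as a Homebrew service (assumption):
	sudo brew services restart socket_vmnet
	# Or relaunch the daemon directly from the prefix these logs reference
	# (the gateway address is an assumption):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet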

TestDockerFlags (10.02s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-485000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-485000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.786078875s)

-- stdout --
	* [docker-flags-485000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-485000" primary control-plane node in "docker-flags-485000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-485000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:56:40.994769    4013 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:56:40.994886    4013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:40.994890    4013 out.go:358] Setting ErrFile to fd 2...
	I0926 17:56:40.994892    4013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:40.995016    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:56:40.996157    4013 out.go:352] Setting JSON to false
	I0926 17:56:41.012229    4013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3364,"bootTime":1727395237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:56:41.012303    4013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:56:41.019633    4013 out.go:177] * [docker-flags-485000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:56:41.028618    4013 notify.go:220] Checking for updates...
	I0926 17:56:41.035508    4013 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:56:41.045563    4013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:56:41.057472    4013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:56:41.061414    4013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:56:41.065620    4013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:56:41.069472    4013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:56:41.072870    4013 config.go:182] Loaded profile config "force-systemd-flag-879000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:41.072947    4013 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:41.072994    4013 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:56:41.077456    4013 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:56:41.083508    4013 start.go:297] selected driver: qemu2
	I0926 17:56:41.083514    4013 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:56:41.083520    4013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:56:41.086009    4013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:56:41.088497    4013 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:56:41.091655    4013 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0926 17:56:41.091676    4013 cni.go:84] Creating CNI manager for ""
	I0926 17:56:41.091711    4013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:56:41.091716    4013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:56:41.091754    4013 start.go:340] cluster config:
	{Name:docker-flags-485000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:56:41.095603    4013 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:56:41.103519    4013 out.go:177] * Starting "docker-flags-485000" primary control-plane node in "docker-flags-485000" cluster
	I0926 17:56:41.111137    4013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:56:41.111151    4013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:56:41.111168    4013 cache.go:56] Caching tarball of preloaded images
	I0926 17:56:41.111241    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:56:41.111248    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:56:41.111307    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/docker-flags-485000/config.json ...
	I0926 17:56:41.111320    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/docker-flags-485000/config.json: {Name:mk98fc95dfd56f1cf4e89b334b5909a3477d320f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:56:41.111753    4013 start.go:360] acquireMachinesLock for docker-flags-485000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:41.111798    4013 start.go:364] duration metric: took 37.542µs to acquireMachinesLock for "docker-flags-485000"
	I0926 17:56:41.111813    4013 start.go:93] Provisioning new machine with config: &{Name:docker-flags-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:41.111851    4013 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:41.116588    4013 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:41.137462    4013 start.go:159] libmachine.API.Create for "docker-flags-485000" (driver="qemu2")
	I0926 17:56:41.137499    4013 client.go:168] LocalClient.Create starting
	I0926 17:56:41.137583    4013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:41.137619    4013 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:41.137631    4013 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:41.137680    4013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:41.137708    4013 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:41.137721    4013 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:41.138147    4013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:41.300132    4013 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:41.328381    4013 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:41.328386    4013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:41.328574    4013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:41.337677    4013 main.go:141] libmachine: STDOUT: 
	I0926 17:56:41.337691    4013 main.go:141] libmachine: STDERR: 
	I0926 17:56:41.337737    4013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2 +20000M
	I0926 17:56:41.345684    4013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:41.345697    4013 main.go:141] libmachine: STDERR: 
	I0926 17:56:41.345709    4013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:41.345713    4013 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:41.345727    4013 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:41.345755    4013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:9f:f8:16:b1:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:41.347333    4013 main.go:141] libmachine: STDOUT: 
	I0926 17:56:41.347346    4013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:41.347363    4013 client.go:171] duration metric: took 209.863625ms to LocalClient.Create
	I0926 17:56:43.349498    4013 start.go:128] duration metric: took 2.23768775s to createHost
	I0926 17:56:43.349592    4013 start.go:83] releasing machines lock for "docker-flags-485000", held for 2.237818916s
	W0926 17:56:43.349647    4013 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:43.379842    4013 out.go:177] * Deleting "docker-flags-485000" in qemu2 ...
	W0926 17:56:43.406763    4013 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:43.406780    4013 start.go:729] Will try again in 5 seconds ...
	I0926 17:56:48.408770    4013 start.go:360] acquireMachinesLock for docker-flags-485000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:48.409014    4013 start.go:364] duration metric: took 192.916µs to acquireMachinesLock for "docker-flags-485000"
	I0926 17:56:48.409073    4013 start.go:93] Provisioning new machine with config: &{Name:docker-flags-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-485000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:48.409185    4013 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:48.422078    4013 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:48.462666    4013 start.go:159] libmachine.API.Create for "docker-flags-485000" (driver="qemu2")
	I0926 17:56:48.462718    4013 client.go:168] LocalClient.Create starting
	I0926 17:56:48.462833    4013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:48.462902    4013 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:48.462918    4013 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:48.462992    4013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:48.463037    4013 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:48.463052    4013 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:48.463961    4013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:48.643220    4013 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:48.682399    4013 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:48.682404    4013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:48.682577    4013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:48.691654    4013 main.go:141] libmachine: STDOUT: 
	I0926 17:56:48.691669    4013 main.go:141] libmachine: STDERR: 
	I0926 17:56:48.691727    4013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2 +20000M
	I0926 17:56:48.699440    4013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:48.699453    4013 main.go:141] libmachine: STDERR: 
	I0926 17:56:48.699463    4013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:48.699468    4013 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:48.699481    4013 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:48.699517    4013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:c5:3b:8c:f5:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/docker-flags-485000/disk.qcow2
	I0926 17:56:48.701127    4013 main.go:141] libmachine: STDOUT: 
	I0926 17:56:48.701139    4013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:48.701154    4013 client.go:171] duration metric: took 238.438916ms to LocalClient.Create
	I0926 17:56:50.703275    4013 start.go:128] duration metric: took 2.294129125s to createHost
	I0926 17:56:50.703369    4013 start.go:83] releasing machines lock for "docker-flags-485000", held for 2.29439925s
	W0926 17:56:50.703694    4013 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:50.720236    4013 out.go:201] 
	W0926 17:56:50.723396    4013 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:56:50.723421    4013 out.go:270] * 
	* 
	W0926 17:56:50.725837    4013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:50.740228    4013 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-485000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (75.201834ms)

-- stdout --
	* The control-plane node docker-flags-485000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-485000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-485000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-485000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-485000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-485000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-485000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.77075ms)

-- stdout --
	* The control-plane node docker-flags-485000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-485000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-485000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-485000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-485000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-485000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-26 17:56:50.877795 -0700 PDT m=+2580.555821292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-485000 -n docker-flags-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-485000 -n docker-flags-485000: exit status 7 (29.035042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-485000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-485000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-485000
--- FAIL: TestDockerFlags (10.02s)
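
Note on the assertions in this test: because the host never ran, both systemctl probes returned the "host is not running" hint instead of unit properties. On a healthy cluster the probes would surface the injected --docker-env and --docker-opt values; a hedged sketch of the expected shape follows (exact systemd formatting varies by version, and the dockerd path shown is an assumption):

	$ out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	Environment=FOO=BAR BAZ=BAT
	$ out/minikube-darwin-arm64 -p docker-flags-485000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }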

TestForceSystemdFlag (10.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-879000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-879000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.117283791s)

-- stdout --
	* [force-systemd-flag-879000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-879000" primary control-plane node in "force-systemd-flag-879000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-879000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:56:35.644472    3992 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:56:35.644597    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:35.644600    3992 out.go:358] Setting ErrFile to fd 2...
	I0926 17:56:35.644602    3992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:35.644724    3992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:56:35.645806    3992 out.go:352] Setting JSON to false
	I0926 17:56:35.661797    3992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3358,"bootTime":1727395237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:56:35.661869    3992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:56:35.668732    3992 out.go:177] * [force-systemd-flag-879000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:56:35.688795    3992 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:56:35.688806    3992 notify.go:220] Checking for updates...
	I0926 17:56:35.701697    3992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:56:35.705725    3992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:56:35.708687    3992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:56:35.711695    3992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:56:35.714773    3992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:56:35.717992    3992 config.go:182] Loaded profile config "force-systemd-env-796000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:35.718076    3992 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:35.718132    3992 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:56:35.722717    3992 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:56:35.729667    3992 start.go:297] selected driver: qemu2
	I0926 17:56:35.729673    3992 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:56:35.729690    3992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:56:35.732113    3992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:56:35.735734    3992 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:56:35.738794    3992 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:56:35.738817    3992 cni.go:84] Creating CNI manager for ""
	I0926 17:56:35.738842    3992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:56:35.738847    3992 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:56:35.738879    3992 start.go:340] cluster config:
	{Name:force-systemd-flag-879000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:56:35.742965    3992 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:56:35.749718    3992 out.go:177] * Starting "force-systemd-flag-879000" primary control-plane node in "force-systemd-flag-879000" cluster
	I0926 17:56:35.753562    3992 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:56:35.753579    3992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:56:35.753589    3992 cache.go:56] Caching tarball of preloaded images
	I0926 17:56:35.753652    3992 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:56:35.753658    3992 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:56:35.753714    3992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/force-systemd-flag-879000/config.json ...
	I0926 17:56:35.753726    3992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/force-systemd-flag-879000/config.json: {Name:mk4ede92bc4ac5e48218049df8fd9aa92a8c663a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:56:35.754250    3992 start.go:360] acquireMachinesLock for force-systemd-flag-879000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:35.754293    3992 start.go:364] duration metric: took 34µs to acquireMachinesLock for "force-systemd-flag-879000"
	I0926 17:56:35.754307    3992 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:35.754346    3992 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:35.770000    3992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:35.789840    3992 start.go:159] libmachine.API.Create for "force-systemd-flag-879000" (driver="qemu2")
	I0926 17:56:35.789890    3992 client.go:168] LocalClient.Create starting
	I0926 17:56:35.789954    3992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:35.789990    3992 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:35.790001    3992 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:35.790060    3992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:35.790091    3992 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:35.790100    3992 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:35.790504    3992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:35.952107    3992 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:36.233997    3992 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:36.234004    3992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:36.234259    3992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:36.244120    3992 main.go:141] libmachine: STDOUT: 
	I0926 17:56:36.244144    3992 main.go:141] libmachine: STDERR: 
	I0926 17:56:36.244195    3992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2 +20000M
	I0926 17:56:36.252088    3992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:36.252102    3992 main.go:141] libmachine: STDERR: 
	I0926 17:56:36.252117    3992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:36.252123    3992 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:36.252133    3992 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:36.252171    3992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:94:25:75:e8:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:36.253813    3992 main.go:141] libmachine: STDOUT: 
	I0926 17:56:36.253826    3992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:36.253851    3992 client.go:171] duration metric: took 463.967292ms to LocalClient.Create
	I0926 17:56:38.255974    3992 start.go:128] duration metric: took 2.501675042s to createHost
	I0926 17:56:38.256035    3992 start.go:83] releasing machines lock for "force-systemd-flag-879000", held for 2.501801s
	W0926 17:56:38.256120    3992 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:38.274328    3992 out.go:177] * Deleting "force-systemd-flag-879000" in qemu2 ...
	W0926 17:56:38.312539    3992 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:38.312555    3992 start.go:729] Will try again in 5 seconds ...
	I0926 17:56:43.314637    3992 start.go:360] acquireMachinesLock for force-systemd-flag-879000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:43.349694    3992 start.go:364] duration metric: took 34.897667ms to acquireMachinesLock for "force-systemd-flag-879000"
	I0926 17:56:43.349832    3992 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-879000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:43.350106    3992 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:43.364929    3992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:43.415476    3992 start.go:159] libmachine.API.Create for "force-systemd-flag-879000" (driver="qemu2")
	I0926 17:56:43.415538    3992 client.go:168] LocalClient.Create starting
	I0926 17:56:43.415662    3992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:43.415724    3992 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:43.415740    3992 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:43.415799    3992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:43.415842    3992 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:43.415853    3992 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:43.419363    3992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:43.603703    3992 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:43.658077    3992 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:43.658083    3992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:43.658267    3992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:43.667661    3992 main.go:141] libmachine: STDOUT: 
	I0926 17:56:43.667687    3992 main.go:141] libmachine: STDERR: 
	I0926 17:56:43.667749    3992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2 +20000M
	I0926 17:56:43.675563    3992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:43.675577    3992 main.go:141] libmachine: STDERR: 
	I0926 17:56:43.675590    3992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:43.675598    3992 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:43.675616    3992 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:43.675644    3992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8d:d9:12:f3:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-flag-879000/disk.qcow2
	I0926 17:56:43.677305    3992 main.go:141] libmachine: STDOUT: 
	I0926 17:56:43.677322    3992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:43.677336    3992 client.go:171] duration metric: took 261.798833ms to LocalClient.Create
	I0926 17:56:45.679453    3992 start.go:128] duration metric: took 2.329385167s to createHost
	I0926 17:56:45.679495    3992 start.go:83] releasing machines lock for "force-systemd-flag-879000", held for 2.3298435s
	W0926 17:56:45.679784    3992 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-879000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:45.701007    3992 out.go:201] 
	W0926 17:56:45.708934    3992 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:56:45.708957    3992 out.go:270] * 
	* 
	W0926 17:56:45.711100    3992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:45.719819    3992 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-879000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-879000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-879000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.602833ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-879000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-879000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-879000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-26 17:56:45.81191 -0700 PDT m=+2575.489794917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-879000 -n force-systemd-flag-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-879000 -n force-systemd-flag-879000: exit status 7 (35.335125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-879000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-879000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-879000
--- FAIL: TestForceSystemdFlag (10.31s)
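Both TestForceSystemdFlag and TestForceSystemdEnv die at the same step: libmachine pipes qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal preflight probe for that state, as a hedged Go sketch (the socket path is taken from the log; this helper is not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Default socket path from the failing qemu2 launch command.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A "connection refused" here matches the STDERR captured
            // above and points at the daemon not running, not at QEMU.
            fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Every other qemu2 start in this run that needs the socket_vmnet network fails the same way and in about the same ten seconds, so this looks like an agent-level daemon problem rather than anything specific to the force-systemd tests.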

                                                
                                    
TestForceSystemdEnv (11.37s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-796000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I0926 17:56:30.339259    1597 install.go:79] stdout: 
W0926 17:56:30.339426    1597 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I0926 17:56:30.339448    1597 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit]
I0926 17:56:30.350920    1597 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit]
I0926 17:56:30.359864    1597 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit]
I0926 17:56:30.368672    1597 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit]
I0926 17:56:30.384758    1597 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 17:56:30.384856    1597 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0926 17:56:32.191624    1597 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0926 17:56:32.191647    1597 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0926 17:56:32.191703    1597 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0926 17:56:32.191743    1597 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit
I0926 17:56:32.583422    1597 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40] Decompressors:map[bz2:0x1400047f4f0 gz:0x1400047f4f8 tar:0x1400047f4a0 tar.bz2:0x1400047f4b0 tar.gz:0x1400047f4c0 tar.xz:0x1400047f4d0 tar.zst:0x1400047f4e0 tbz2:0x1400047f4b0 tgz:0x1400047f4c0 txz:0x1400047f4d0 tzst:0x1400047f4e0 xz:0x1400047f500 zip:0x1400047f510 zst:0x1400047f508] Getters:map[file:0x14001a4b980 http:0x140006267d0 https:0x14000626820] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0926 17:56:32.583541    1597 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit
E0926 17:56:35.072765    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
I0926 17:56:35.573616    1597 install.go:79] stdout: 
W0926 17:56:35.573764    1597 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I0926 17:56:35.573792    1597 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit]
I0926 17:56:35.587561    1597 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit]
I0926 17:56:35.599006    1597 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit]
I0926 17:56:35.607652    1597 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/002/docker-machine-driver-hyperkit]
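The interleaved download.go lines above (from the concurrent TestHyperKitDriverInstallOrUpdate run) show the fallback order: fetch the arch-specific release asset first, and when its checksum file 404s, retry with the common artifact name. A hedged Go sketch of that try-then-fall-back flow, using the URLs from the log (the helper name is illustrative):

    package main

    import (
        "fmt"
        "net/http"
    )

    // reachable reports whether the release asset exists (HTTP 200).
    func reachable(url string) bool {
        resp, err := http.Head(url)
        if err != nil {
            return false
        }
        resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
        // Arch-specific asset first; this one 404s in the log above.
        if reachable(base + "-arm64") {
            fmt.Println("downloading arch-specific driver")
            return
        }
        // Fall back to the common (unsuffixed) artifact name.
        fmt.Println("falling back to", base)
    }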
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-796000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.171177625s)

                                                
                                                
-- stdout --
	* [force-systemd-env-796000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-796000" primary control-plane node in "force-systemd-env-796000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-796000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:56:29.628166    3957 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:56:29.628264    3957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:29.628266    3957 out.go:358] Setting ErrFile to fd 2...
	I0926 17:56:29.628269    3957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:56:29.628394    3957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:56:29.629519    3957 out.go:352] Setting JSON to false
	I0926 17:56:29.645932    3957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3352,"bootTime":1727395237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:56:29.645998    3957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:56:29.652159    3957 out.go:177] * [force-systemd-env-796000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:56:29.660059    3957 notify.go:220] Checking for updates...
	I0926 17:56:29.664964    3957 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:56:29.672807    3957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:56:29.680953    3957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:56:29.687914    3957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:56:29.694919    3957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:56:29.702967    3957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0926 17:56:29.707183    3957 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:56:29.707230    3957 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:56:29.710931    3957 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:56:29.718983    3957 start.go:297] selected driver: qemu2
	I0926 17:56:29.718989    3957 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:56:29.718994    3957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:56:29.721312    3957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:56:29.725074    3957 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:56:29.728988    3957 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:56:29.729004    3957 cni.go:84] Creating CNI manager for ""
	I0926 17:56:29.729029    3957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:56:29.729033    3957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:56:29.729063    3957 start.go:340] cluster config:
	{Name:force-systemd-env-796000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:56:29.732792    3957 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:56:29.737751    3957 out.go:177] * Starting "force-systemd-env-796000" primary control-plane node in "force-systemd-env-796000" cluster
	I0926 17:56:29.742048    3957 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:56:29.742062    3957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:56:29.742071    3957 cache.go:56] Caching tarball of preloaded images
	I0926 17:56:29.742128    3957 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:56:29.742133    3957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:56:29.742195    3957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/force-systemd-env-796000/config.json ...
	I0926 17:56:29.742205    3957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/force-systemd-env-796000/config.json: {Name:mkf3da4f5e155dcf5baf9635e705978d2c791084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:56:29.742399    3957 start.go:360] acquireMachinesLock for force-systemd-env-796000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:29.742431    3957 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "force-systemd-env-796000"
	I0926 17:56:29.742443    3957 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:29.742468    3957 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:29.749918    3957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:29.766073    3957 start.go:159] libmachine.API.Create for "force-systemd-env-796000" (driver="qemu2")
	I0926 17:56:29.766105    3957 client.go:168] LocalClient.Create starting
	I0926 17:56:29.766171    3957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:29.766203    3957 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:29.766212    3957 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:29.766254    3957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:29.766278    3957 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:29.766287    3957 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:29.766652    3957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:29.929332    3957 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:30.050869    3957 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:30.050876    3957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:30.051064    3957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:30.060461    3957 main.go:141] libmachine: STDOUT: 
	I0926 17:56:30.060475    3957 main.go:141] libmachine: STDERR: 
	I0926 17:56:30.060542    3957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2 +20000M
	I0926 17:56:30.068619    3957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:30.068633    3957 main.go:141] libmachine: STDERR: 
	I0926 17:56:30.068657    3957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:30.068663    3957 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:30.068676    3957 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:30.068705    3957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:b0:35:54:a5:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:30.070344    3957 main.go:141] libmachine: STDOUT: 
	I0926 17:56:30.070358    3957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:30.070376    3957 client.go:171] duration metric: took 304.273958ms to LocalClient.Create
	I0926 17:56:32.072405    3957 start.go:128] duration metric: took 2.329995208s to createHost
	I0926 17:56:32.072438    3957 start.go:83] releasing machines lock for "force-systemd-env-796000", held for 2.330066792s
	W0926 17:56:32.072452    3957 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:32.092964    3957 out.go:177] * Deleting "force-systemd-env-796000" in qemu2 ...
	W0926 17:56:32.109990    3957 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:32.110002    3957 start.go:729] Will try again in 5 seconds ...
	I0926 17:56:37.112147    3957 start.go:360] acquireMachinesLock for force-systemd-env-796000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:56:38.256188    3957 start.go:364] duration metric: took 1.143937042s to acquireMachinesLock for "force-systemd-env-796000"
	I0926 17:56:38.256324    3957 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-796000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-796000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:56:38.256623    3957 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:56:38.269370    3957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0926 17:56:38.320076    3957 start.go:159] libmachine.API.Create for "force-systemd-env-796000" (driver="qemu2")
	I0926 17:56:38.320119    3957 client.go:168] LocalClient.Create starting
	I0926 17:56:38.320308    3957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:56:38.320367    3957 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:38.320385    3957 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:38.320446    3957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:56:38.320495    3957 main.go:141] libmachine: Decoding PEM data...
	I0926 17:56:38.320506    3957 main.go:141] libmachine: Parsing certificate...
	I0926 17:56:38.323726    3957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:56:38.510615    3957 main.go:141] libmachine: Creating SSH key...
	I0926 17:56:38.691631    3957 main.go:141] libmachine: Creating Disk image...
	I0926 17:56:38.691640    3957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:56:38.691842    3957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:38.701590    3957 main.go:141] libmachine: STDOUT: 
	I0926 17:56:38.701611    3957 main.go:141] libmachine: STDERR: 
	I0926 17:56:38.701692    3957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2 +20000M
	I0926 17:56:38.709744    3957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:56:38.709762    3957 main.go:141] libmachine: STDERR: 
	I0926 17:56:38.709774    3957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:38.709784    3957 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:56:38.709792    3957 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:56:38.709817    3957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:95:d9:e1:a7:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/force-systemd-env-796000/disk.qcow2
	I0926 17:56:38.711451    3957 main.go:141] libmachine: STDOUT: 
	I0926 17:56:38.711467    3957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:56:38.711479    3957 client.go:171] duration metric: took 391.3645ms to LocalClient.Create
	I0926 17:56:40.713710    3957 start.go:128] duration metric: took 2.457105583s to createHost
	I0926 17:56:40.713797    3957 start.go:83] releasing machines lock for "force-systemd-env-796000", held for 2.457620416s
	W0926 17:56:40.714106    3957 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-796000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:56:40.738626    3957 out.go:201] 
	W0926 17:56:40.742520    3957 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:56:40.742541    3957 out.go:270] * 
	* 
	W0926 17:56:40.744590    3957 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:56:40.756519    3957 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-796000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-796000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-796000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.698ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-796000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-796000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-796000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-26 17:56:40.853597 -0700 PDT m=+2570.531343417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-796000 -n force-systemd-env-796000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-796000 -n force-systemd-env-796000: exit status 7 (35.671375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-796000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-796000
--- FAIL: TestForceSystemdEnv (11.37s)
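As with the flag variant, the VM never comes up, so the follow-on probe at docker_test.go:110 can only fail with exit status 83 ("host is not running"). For reference, a stand-alone Go sketch of that probe, running the same command the test runs (the profile name is a parameter):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // cgroupDriver asks the VM's Docker daemon which cgroup driver it
    // uses, via minikube ssh, mirroring docker_test.go:110.
    func cgroupDriver(profile string) (string, error) {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile,
            "ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        driver, err := cgroupDriver("force-systemd-env-796000")
        if err != nil {
            // With the host stopped this fails, as captured above.
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("cgroup driver:", driver)
    }

With MINIKUBE_FORCE_SYSTEMD=true the test presumably expects this to print "systemd"; that assertion is never reached while provisioning fails.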

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (38.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-449000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-449000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-cx44j" [b714dd83-e521-43f2-bf01-8423957925ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-cx44j" [b714dd83-e521-43f2-bf01-8423957925ff] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003996791s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31141
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:11.813347    1597 retry.go:31] will retry after 715.378777ms: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:12.532337    1597 retry.go:31] will retry after 2.104976226s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:14.640067    1597 retry.go:31] will retry after 2.352398877s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:16.995746    1597 retry.go:31] will retry after 4.81998313s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:21.819431    1597 retry.go:31] will retry after 2.885139969s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:24.707279    1597 retry.go:31] will retry after 10.564444951s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
I0926 17:36:35.274758    1597 retry.go:31] will retry after 6.409873992s: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31141: Get "http://192.168.105.4:31141": dial tcp 192.168.105.4:31141: connect: connection refused
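The retry.go lines above show the shape of the client-side loop: each refused GET schedules another attempt after a growing, jittered delay until an overall deadline. A minimal hedged sketch of that pattern (the deadline and starting backoff here are illustrative, not the test's actual values):

    package main

    import (
        "fmt"
        "math/rand"
        "net/http"
        "time"
    )

    func fetchWithRetry(url string, deadline time.Duration) error {
        start := time.Now()
        backoff := 500 * time.Millisecond
        for {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("giving up on %s: %w", url, err)
            }
            // Randomize the wait so concurrent retries don't synchronize.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            backoff *= 2
        }
    }

    func main() {
        _ = fetchWithRetry("http://192.168.105.4:31141", 30*time.Second)
    }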
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-449000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-cx44j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-449000/192.168.105.4
Start Time:       Thu, 26 Sep 2024 17:36:03 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://a9e0951b52757b148fc5f7cfa29c14bb67a839f35a93ed57b484b9345fcf4fa8
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 26 Sep 2024 17:36:19 -0700
      Finished:     Thu, 26 Sep 2024 17:36:19 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mgvcm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-mgvcm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-cx44j to functional-449000
  Normal   Pulled     22s (x3 over 37s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    22s (x3 over 37s)  kubelet            Created container echoserver-arm
  Normal   Started    22s (x3 over 37s)  kubelet            Started container echoserver-arm
  Warning  BackOff    8s (x3 over 35s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-cx44j_default(b714dd83-e521-43f2-bf01-8423957925ff)

                                                
                                                
functional_test.go:1608: (dbg) Run:  kubectl --context functional-449000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-449000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.103.61
IPs:                      10.97.103.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31141/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
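Taken together, the three dumps give one consistent diagnosis: the container log shows "exec /usr/sbin/nginx: exec format error", meaning the entrypoint binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than this arm64 node; the container therefore crash-loops and never turns Ready; and with no Ready pods behind it, the service's Endpoints list stays empty, so every connection to NodePort 31141 is refused. One way to confirm a wrong-architecture binary is to read its ELF header and compare the machine field against the host, as in this hedged sketch (the path is illustrative):

    package main

    import (
        "debug/elf"
        "fmt"
        "runtime"
    )

    func main() {
        // Illustrative path: the binary the pod log failed to exec.
        f, err := elf.Open("/usr/sbin/nginx")
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()
        fmt.Printf("binary machine: %v, host arch: %s\n", f.Machine, runtime.GOARCH)
        // On this arm64 node, anything other than EM_AARCH64 would
        // reproduce the "exec format error" from the pod log.
        if f.Machine != elf.EM_AARCH64 {
            fmt.Println("architecture mismatch")
        }
    }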
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-449000 -n functional-449000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh -- ls                                                                                          | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh cat                                                                                            | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | /mount-9p/test-1727397385944286000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh stat                                                                                           | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh stat                                                                                           | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh sudo                                                                                           | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2957990606/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh -- ls                                                                                          | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh sudo                                                                                           | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-449000 ssh findmnt                                                                                        | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT | 26 Sep 24 17:36 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-449000 --dry-run                                                                                       | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-449000                                                                                                 | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-449000 | jenkins | v1.34.0 | 26 Sep 24 17:36 PDT |                     |
	|           | -p functional-449000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:36:33
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:36:33.946324    2810 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:36:33.946437    2810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.946440    2810 out.go:358] Setting ErrFile to fd 2...
	I0926 17:36:33.946442    2810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.946567    2810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:36:33.947836    2810 out.go:352] Setting JSON to false
	I0926 17:36:33.965072    2810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2156,"bootTime":1727395237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:36:33.965166    2810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:36:33.969585    2810 out.go:177] * [functional-449000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:36:33.976588    2810 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:36:33.976649    2810 notify.go:220] Checking for updates...
	I0926 17:36:33.983586    2810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:36:33.986513    2810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:36:33.989564    2810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:36:33.992466    2810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:36:33.995525    2810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:36:33.998861    2810 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:36:33.999142    2810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:36:34.003503    2810 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:36:34.010507    2810 start.go:297] selected driver: qemu2
	I0926 17:36:34.010514    2810 start.go:901] validating driver "qemu2" against &{Name:functional-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:36:34.010608    2810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:36:34.016512    2810 out.go:201] 
	W0926 17:36:34.019537    2810 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 17:36:34.023525    2810 out.go:201] 
	
	
	==> Docker <==
	Sep 27 00:36:35 functional-449000 dockerd[5851]: time="2024-09-27T00:36:35.025890766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:36:35 functional-449000 dockerd[5851]: time="2024-09-27T00:36:35.025896307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:35 functional-449000 dockerd[5851]: time="2024-09-27T00:36:35.025944391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:35 functional-449000 cri-dockerd[6173]: time="2024-09-27T00:36:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7ca0884166ea6ff0316ee9e634a0181f0ee44f900507dde3ca5e08552a2df61/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 27 00:36:35 functional-449000 dockerd[5845]: time="2024-09-27T00:36:35.247978253Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" spanID=8bdfe63481b7eabf traceID=f8af7e33cb88fdce4c08d13d3d86b971
	Sep 27 00:36:36 functional-449000 cri-dockerd[6173]: time="2024-09-27T00:36:36Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.000343141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.000368891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.000377516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.000408600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:37 functional-449000 dockerd[5845]: time="2024-09-27T00:36:37.173277026Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=6ae04ea869bc9c5e traceID=99a7178587c7dd14730f15a0767b0706
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.583118870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.583184162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.583196787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.583232286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:37 functional-449000 dockerd[5845]: time="2024-09-27T00:36:37.606456696Z" level=info msg="ignoring event" container=6c22097319c87f4e0e223834a8a185dadf65d8f5a80c46a0771c5064fcfa5e4c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.606597362Z" level=info msg="shim disconnected" id=6c22097319c87f4e0e223834a8a185dadf65d8f5a80c46a0771c5064fcfa5e4c namespace=moby
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.606626362Z" level=warning msg="cleaning up after shim disconnected" id=6c22097319c87f4e0e223834a8a185dadf65d8f5a80c46a0771c5064fcfa5e4c namespace=moby
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.606630529Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 27 00:36:37 functional-449000 dockerd[5851]: time="2024-09-27T00:36:37.610894978Z" level=warning msg="cleanup warnings time=\"2024-09-27T00:36:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 27 00:36:41 functional-449000 cri-dockerd[6173]: time="2024-09-27T00:36:41Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 27 00:36:41 functional-449000 dockerd[5851]: time="2024-09-27T00:36:41.714463373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 00:36:41 functional-449000 dockerd[5851]: time="2024-09-27T00:36:41.714500539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 00:36:41 functional-449000 dockerd[5851]: time="2024-09-27T00:36:41.714512914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 00:36:41 functional-449000 dockerd[5851]: time="2024-09-27T00:36:41.714542081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c46c196089c92       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         1 second ago         Running             kubernetes-dashboard        0                   a7ca0884166ea       kubernetes-dashboard-695b96c756-f2s4m
	6c22097319c87       72565bf5bbedf                                                                                          5 seconds ago        Exited              echoserver-arm              3                   9e056aaae142e       hello-node-64b4f8f9ff-7pxb2
	2147e9160ddc1       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   6 seconds ago        Running             dashboard-metrics-scraper   0                   e2d780d12e068       dashboard-metrics-scraper-c5db448b4-2xqjv
	594817606babe       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    13 seconds ago       Exited              mount-munger                0                   199781122b843       busybox-mount
	a9e0951b52757       72565bf5bbedf                                                                                          23 seconds ago       Exited              echoserver-arm              2                   56d0a528bc969       hello-node-connect-65d86f57f4-cx44j
	9bd31e938f607       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          23 seconds ago       Running             myfrontend                  0                   dfcb93ec71c58       sp-pod
	05903aee5d5a3       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          46 seconds ago       Running             nginx                       0                   09007fa51e662       nginx-svc
	c3c7cdb0b8df8       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         0                   d792f0f86da42       storage-provisioner
	dc59cdc42939d       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     0                   dca21dd8ab886       coredns-7c65d6cfc9-7mkln
	e293e74550e83       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     0                   23be0e50d0308       coredns-7c65d6cfc9-8b49p
	43400c4c30b83       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  0                   8f6e37a3ca953       kube-proxy-4bx9b
	07bcdfbaa94f8       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              0                   601e3594c116f       kube-scheduler-functional-449000
	4b1a6718d7ade       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   8522e73090d49       kube-apiserver-functional-449000
	4a444bedda96b       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     0                   f7aa1dfc5c57e       kube-controller-manager-functional-449000
	51661178bf329       27e3830e14027                                                                                          About a minute ago   Running             etcd                        0                   26510e9a244ed       etcd-functional-449000
	
	
	==> coredns [dc59cdc42939] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	
	
	==> coredns [e293e74550e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               functional-449000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-449000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=functional-449000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T17_35_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:35:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-449000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:36:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:36:26 +0000   Fri, 27 Sep 2024 00:35:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:36:26 +0000   Fri, 27 Sep 2024 00:35:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:36:26 +0000   Fri, 27 Sep 2024 00:35:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:36:26 +0000   Fri, 27 Sep 2024 00:35:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-449000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 9b1745e912c64661bba6575d54c7cedf
	  System UUID:                9b1745e912c64661bba6575d54c7cedf
	  Boot ID:                    62a6c9c2-7f0f-4a32-b30c-a77ec42f0b64
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-7pxb2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  default                     hello-node-connect-65d86f57f4-cx44j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 coredns-7c65d6cfc9-7mkln                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     72s
	  kube-system                 coredns-7c65d6cfc9-8b49p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     72s
	  kube-system                 etcd-functional-449000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         78s
	  kube-system                 kube-apiserver-functional-449000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-functional-449000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-4bx9b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-functional-449000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-2xqjv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-f2s4m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (6%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 71s   kube-proxy       
	  Normal  Starting                 78s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  78s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s   kubelet          Node functional-449000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s   kubelet          Node functional-449000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s   kubelet          Node functional-449000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           73s   node-controller  Node functional-449000 event: Registered Node functional-449000 in Controller
	
	
	==> dmesg <==
	[  +0.090348] systemd-fstab-generator[5407]: Ignoring "noauto" option for root device
	[  +0.094848] systemd-fstab-generator[5419]: Ignoring "noauto" option for root device
	[  +0.094872] systemd-fstab-generator[5433]: Ignoring "noauto" option for root device
	[  +5.124360] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.340093] systemd-fstab-generator[6053]: Ignoring "noauto" option for root device
	[  +0.089625] systemd-fstab-generator[6065]: Ignoring "noauto" option for root device
	[  +0.077025] systemd-fstab-generator[6077]: Ignoring "noauto" option for root device
	[  +0.110441] systemd-fstab-generator[6165]: Ignoring "noauto" option for root device
	[  +0.214987] systemd-fstab-generator[6333]: Ignoring "noauto" option for root device
	[  +0.963040] systemd-fstab-generator[6456]: Ignoring "noauto" option for root device
	[  +1.287622] kauditd_printk_skb: 184 callbacks suppressed
	[Sep27 00:35] systemd-fstab-generator[17952]: Ignoring "noauto" option for root device
	[  +4.011921] systemd-fstab-generator[18362]: Ignoring "noauto" option for root device
	[  +0.052140] kauditd_printk_skb: 59 callbacks suppressed
	[  +6.102271] systemd-fstab-generator[18487]: Ignoring "noauto" option for root device
	[  +0.043592] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.199260] kauditd_printk_skb: 72 callbacks suppressed
	[ +10.635092] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.401095] kauditd_printk_skb: 10 callbacks suppressed
	[Sep27 00:36] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.940809] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.623281] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.556055] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.730865] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.073053] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [51661178bf32] <==
	{"level":"info","ts":"2024-09-27T00:35:21.622353Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T00:35:21.622378Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T00:35:21.622382Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T00:35:21.622896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-09-27T00:35:21.622947Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-27T00:35:22.013380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T00:35:22.013497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T00:35:22.013534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 1"}
	{"level":"info","ts":"2024-09-27T00:35:22.013560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T00:35:22.013605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-27T00:35:22.013636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T00:35:22.013670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-27T00:35:22.021502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:35:22.021694Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:35:22.021483Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-449000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T00:35:22.022209Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:35:22.022705Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:35:22.022844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-27T00:35:22.023029Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:35:22.023080Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:35:22.023106Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:35:22.023515Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:35:22.023954Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:35:22.029608Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T00:35:22.029636Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:36:42 up 7 min,  0 users,  load average: 0.54, 0.39, 0.21
	Linux functional-449000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b1a6718d7ad] <==
	I0927 00:35:22.751293       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 00:35:22.751295       1 cache.go:39] Caches are synced for autoregister controller
	I0927 00:35:22.751314       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 00:35:22.780576       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 00:35:23.663226       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0927 00:35:23.669992       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0927 00:35:23.670094       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 00:35:23.835801       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 00:35:23.847070       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 00:35:23.955557       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0927 00:35:23.957591       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0927 00:35:23.957969       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 00:35:23.959317       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 00:35:24.676315       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 00:35:24.709078       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 00:35:24.712878       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 00:35:24.717252       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 00:35:30.179950       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0927 00:35:30.330344       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 00:35:42.710831       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.96.34"}
	I0927 00:35:48.388505       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.245.192"}
	I0927 00:35:53.351280       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.113.46"}
	I0927 00:36:03.797037       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.103.61"}
	I0927 00:36:34.646825       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.186.239"}
	I0927 00:36:34.656922       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.56.121"}
	
	
	==> kube-controller-manager [4a444bedda96] <==
	E0927 00:36:34.572303       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.585638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.151717ms"
	E0927 00:36:34.585762       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.586332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.924727ms"
	E0927 00:36:34.586358       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.594066       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.992237ms"
	E0927 00:36:34.594115       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.603079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.399767ms"
	E0927 00:36:34.603148       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.604901       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.47657ms"
	E0927 00:36:34.605039       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.613251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.987449ms"
	E0927 00:36:34.613306       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.613883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.052567ms"
	E0927 00:36:34.614250       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0927 00:36:34.633111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.407695ms"
	I0927 00:36:34.641674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.532151ms"
	I0927 00:36:34.641704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.167µs"
	I0927 00:36:34.665546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.226106ms"
	I0927 00:36:34.676643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.042063ms"
	I0927 00:36:34.711279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="34.58448ms"
	I0927 00:36:34.711348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="48.541µs"
	I0927 00:36:37.498246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.619736ms"
	I0927 00:36:37.498505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="31.708µs"
	I0927 00:36:38.499835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.5µs"
	
	
	==> kube-proxy [43400c4c30b8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 00:35:30.770273       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 00:35:30.773873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0927 00:35:30.773893       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:35:30.781444       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 00:35:30.781459       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 00:35:30.781471       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:35:30.782056       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:35:30.782137       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:35:30.782142       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:35:30.783205       1 config.go:199] "Starting service config controller"
	I0927 00:35:30.783212       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:35:30.783221       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:35:30.783223       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:35:30.784511       1 config.go:328] "Starting node config controller"
	I0927 00:35:30.784514       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:35:30.883987       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:35:30.884044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:35:30.884819       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07bcdfbaa94f] <==
	W0927 00:35:22.705312       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:35:22.705363       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:35:22.705384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0927 00:35:22.705422       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:35:22.705432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0927 00:35:22.705424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:22.705505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 00:35:22.705526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:22.705544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:35:22.705553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:22.705569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:35:22.705579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.532850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:35:23.532974       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.590137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:35:23.590202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.606415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:35:23.606648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.661783       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:35:23.662300       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.671923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:35:23.672236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:35:23.794118       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:35:23.794239       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 00:35:25.502570       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:36:24 functional-449000 kubelet[18369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 00:36:24 functional-449000 kubelet[18369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 00:36:24 functional-449000 kubelet[18369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 00:36:27 functional-449000 kubelet[18369]: I0927 00:36:27.660984   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a4a920ac-5679-4e9f-8669-8f1209d98630-test-volume\") pod \"busybox-mount\" (UID: \"a4a920ac-5679-4e9f-8669-8f1209d98630\") " pod="default/busybox-mount"
	Sep 27 00:36:27 functional-449000 kubelet[18369]: I0927 00:36:27.661025   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wwtk\" (UniqueName: \"kubernetes.io/projected/a4a920ac-5679-4e9f-8669-8f1209d98630-kube-api-access-6wwtk\") pod \"busybox-mount\" (UID: \"a4a920ac-5679-4e9f-8669-8f1209d98630\") " pod="default/busybox-mount"
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.594697   18369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a4a920ac-5679-4e9f-8669-8f1209d98630-test-volume\") pod \"a4a920ac-5679-4e9f-8669-8f1209d98630\" (UID: \"a4a920ac-5679-4e9f-8669-8f1209d98630\") "
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.594725   18369 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wwtk\" (UniqueName: \"kubernetes.io/projected/a4a920ac-5679-4e9f-8669-8f1209d98630-kube-api-access-6wwtk\") pod \"a4a920ac-5679-4e9f-8669-8f1209d98630\" (UID: \"a4a920ac-5679-4e9f-8669-8f1209d98630\") "
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.594916   18369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4a920ac-5679-4e9f-8669-8f1209d98630-test-volume" (OuterVolumeSpecName: "test-volume") pod "a4a920ac-5679-4e9f-8669-8f1209d98630" (UID: "a4a920ac-5679-4e9f-8669-8f1209d98630"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.597767   18369 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4a920ac-5679-4e9f-8669-8f1209d98630-kube-api-access-6wwtk" (OuterVolumeSpecName: "kube-api-access-6wwtk") pod "a4a920ac-5679-4e9f-8669-8f1209d98630" (UID: "a4a920ac-5679-4e9f-8669-8f1209d98630"). InnerVolumeSpecName "kube-api-access-6wwtk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.695289   18369 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a4a920ac-5679-4e9f-8669-8f1209d98630-test-volume\") on node \"functional-449000\" DevicePath \"\""
	Sep 27 00:36:31 functional-449000 kubelet[18369]: I0927 00:36:31.695310   18369 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6wwtk\" (UniqueName: \"kubernetes.io/projected/a4a920ac-5679-4e9f-8669-8f1209d98630-kube-api-access-6wwtk\") on node \"functional-449000\" DevicePath \"\""
	Sep 27 00:36:32 functional-449000 kubelet[18369]: I0927 00:36:32.400657   18369 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="199781122b84388ca5116c2285f9f928b9ce7c4a746049bad98d1ed388b49361"
	Sep 27 00:36:33 functional-449000 kubelet[18369]: I0927 00:36:33.562228   18369 scope.go:117] "RemoveContainer" containerID="a9e0951b52757b148fc5f7cfa29c14bb67a839f35a93ed57b484b9345fcf4fa8"
	Sep 27 00:36:33 functional-449000 kubelet[18369]: E0927 00:36:33.562694   18369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-cx44j_default(b714dd83-e521-43f2-bf01-8423957925ff)\"" pod="default/hello-node-connect-65d86f57f4-cx44j" podUID="b714dd83-e521-43f2-bf01-8423957925ff"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: E0927 00:36:34.630034   18369 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a4a920ac-5679-4e9f-8669-8f1209d98630" containerName="mount-munger"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: I0927 00:36:34.630065   18369 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4a920ac-5679-4e9f-8669-8f1209d98630" containerName="mount-munger"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: I0927 00:36:34.722351   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppdt\" (UniqueName: \"kubernetes.io/projected/c1bf8ec7-2373-40af-9c9d-1968370156df-kube-api-access-gppdt\") pod \"dashboard-metrics-scraper-c5db448b4-2xqjv\" (UID: \"c1bf8ec7-2373-40af-9c9d-1968370156df\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-2xqjv"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: I0927 00:36:34.722435   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1bf8ec7-2373-40af-9c9d-1968370156df-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-2xqjv\" (UID: \"c1bf8ec7-2373-40af-9c9d-1968370156df\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-2xqjv"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: I0927 00:36:34.822679   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/96000a07-4613-4dfd-9b89-16d468cc6af1-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-f2s4m\" (UID: \"96000a07-4613-4dfd-9b89-16d468cc6af1\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-f2s4m"
	Sep 27 00:36:34 functional-449000 kubelet[18369]: I0927 00:36:34.822712   18369 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbdvc\" (UniqueName: \"kubernetes.io/projected/96000a07-4613-4dfd-9b89-16d468cc6af1-kube-api-access-xbdvc\") pod \"kubernetes-dashboard-695b96c756-f2s4m\" (UID: \"96000a07-4613-4dfd-9b89-16d468cc6af1\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-f2s4m"
	Sep 27 00:36:37 functional-449000 kubelet[18369]: I0927 00:36:37.561629   18369 scope.go:117] "RemoveContainer" containerID="0cf8d3bd4448b6c3e6a99ac0833e6302223d5e4d23456b21696af7a31994e240"
	Sep 27 00:36:38 functional-449000 kubelet[18369]: I0927 00:36:38.492542   18369 scope.go:117] "RemoveContainer" containerID="0cf8d3bd4448b6c3e6a99ac0833e6302223d5e4d23456b21696af7a31994e240"
	Sep 27 00:36:38 functional-449000 kubelet[18369]: I0927 00:36:38.492638   18369 scope.go:117] "RemoveContainer" containerID="6c22097319c87f4e0e223834a8a185dadf65d8f5a80c46a0771c5064fcfa5e4c"
	Sep 27 00:36:38 functional-449000 kubelet[18369]: E0927 00:36:38.492685   18369 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-7pxb2_default(846ee69c-0054-4bce-8865-e661ea8cb517)\"" pod="default/hello-node-64b4f8f9ff-7pxb2" podUID="846ee69c-0054-4bce-8865-e661ea8cb517"
	Sep 27 00:36:38 functional-449000 kubelet[18369]: I0927 00:36:38.500274   18369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-2xqjv" podStartSLOduration=2.574140615 podStartE2EDuration="4.500262801s" podCreationTimestamp="2024-09-27 00:36:34 +0000 UTC" firstStartedPulling="2024-09-27 00:36:35.033538544 +0000 UTC m=+70.525561142" lastFinishedPulling="2024-09-27 00:36:36.959660688 +0000 UTC m=+72.451683328" observedRunningTime="2024-09-27 00:36:37.493116852 +0000 UTC m=+72.985139491" watchObservedRunningTime="2024-09-27 00:36:38.500262801 +0000 UTC m=+73.992285399"
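
Two things stand out in this kubelet log: the ip6tables failure (the guest kernel has no ip6table_nat module, so the kubelet cannot create its IPv6 nat canary chain) and the echoserver-arm containers stuck in CrashLoopBackOff, which is what ServiceCmdConnect ultimately trips over. A hedged triage sketch, with the pod and profile names copied from the log above:

	# previous-run logs and events for the crashing container
	kubectl --context functional-449000 logs hello-node-connect-65d86f57f4-cx44j --previous
	kubectl --context functional-449000 describe pod hello-node-connect-65d86f57f4-cx44j
	# inside the VM, check whether the IPv6 nat table can be loaded at all
	out/minikube-darwin-arm64 -p functional-449000 ssh -- sudo modprobe ip6table_nat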
	
	
	==> kubernetes-dashboard [c46c196089c9] <==
	2024/09/27 00:36:41 Using namespace: kubernetes-dashboard
	2024/09/27 00:36:41 Using in-cluster config to connect to apiserver
	2024/09/27 00:36:41 Using secret token for csrf signing
	2024/09/27 00:36:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/27 00:36:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/27 00:36:41 Successful initial request to the apiserver, version: v1.31.1
	2024/09/27 00:36:41 Generating JWE encryption key
	2024/09/27 00:36:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/27 00:36:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/27 00:36:41 Initializing JWE encryption key from synchronized object
	2024/09/27 00:36:41 Creating in-cluster Sidecar client
	2024/09/27 00:36:41 Successful request to sidecar
	2024/09/27 00:36:41 Serving insecurely on HTTP port: 9090
	2024/09/27 00:36:41 Starting overwatch
	
	
	==> storage-provisioner [c3c7cdb0b8df] <==
	I0927 00:35:31.278034       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:35:31.281832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:35:31.281850       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:35:31.285202       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:35:31.285311       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-449000_eeb7881d-0fc8-47e2-bb57-b0777d1dcd51!
	I0927 00:35:31.285700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc96816e-f568-456f-86b1-c3588c553f36", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-449000_eeb7881d-0fc8-47e2-bb57-b0777d1dcd51 became leader
	I0927 00:35:31.385490       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-449000_eeb7881d-0fc8-47e2-bb57-b0777d1dcd51!
	I0927 00:36:06.066489       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0927 00:36:06.066830       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9dcc7e3c-68b8-421a-96ab-048c9542cee8", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0927 00:36:06.066594       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ece755b6-87f2-40bc-9e17-7cda44a541f2 336 0 2024-09-27 00:35:30 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-27 00:35:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-9dcc7e3c-68b8-421a-96ab-048c9542cee8 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  9dcc7e3c-68b8-421a-96ab-048c9542cee8 526 0 2024-09-27 00:36:06 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-27 00:36:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-27 00:36:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0927 00:36:06.067236       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-9dcc7e3c-68b8-421a-96ab-048c9542cee8" provisioned
	I0927 00:36:06.067314       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0927 00:36:06.067352       1 volume_store.go:212] Trying to save persistentvolume "pvc-9dcc7e3c-68b8-421a-96ab-048c9542cee8"
	I0927 00:36:06.072606       1 volume_store.go:219] persistentvolume "pvc-9dcc7e3c-68b8-421a-96ab-048c9542cee8" saved
	I0927 00:36:06.073087       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"9dcc7e3c-68b8-421a-96ab-048c9542cee8", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-9dcc7e3c-68b8-421a-96ab-048c9542cee8
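
The provisioning round-trip above can be reproduced from what the provisioner logged: claim "myclaim" in namespace "default", 500Mi, ReadWriteOnce, the default "standard" storage class. A minimal sketch of that PVC, reconstructed from the object dump in the log (not taken from the test source):

	kubectl --context functional-449000 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF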
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-449000 -n functional-449000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-449000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-449000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-449000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-449000/192.168.105.4
	Start Time:       Thu, 26 Sep 2024 17:36:27 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://594817606babed703f334f8e4dd4bd7a2e8ba6b2576160714e135ffd371b0e82
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 26 Sep 2024 17:36:29 -0700
	      Finished:     Thu, 26 Sep 2024 17:36:29 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6wwtk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6wwtk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-449000
	  Normal  Pulling    15s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     13s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.592s (1.592s including waiting). Image size: 3547125 bytes.
	  Normal  Created    13s   kubelet            Created container mount-munger
	  Normal  Started    13s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (38.98s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (64.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 node stop m02 -v=7 --alsologtostderr
E0926 17:40:48.938855    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:49.582422    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:50.865469    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:53.428806    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:58.552130    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-380000 node stop m02 -v=7 --alsologtostderr: (12.185930792s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
E0926 17:41:08.793443    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr: (25.958097708s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
E0926 17:41:29.276407    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:41:35.082285    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 3 (25.9733705s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:41:52.968280    3144 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0926 17:41:52.968288    3144 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.12s)
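
Every status probe in this test dies on "dial tcp 192.168.105.5:22 ... operation timed out", i.e. SSH to the primary node is unreachable, not just the deliberately stopped m02. A quick host-side reachability probe (nc ships with macOS; the IP is taken from the log above):

	nc -z -w 5 192.168.105.5 22 && echo "ssh port open" || echo "ssh port unreachable"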

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0926 17:42:10.238772    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.974608917s)
ha_test.go:413: expected profile "ha-380000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-380000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-380000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-380000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 3 (25.961682375s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:42:44.901970    3171 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0926 17:42:44.901989    3171 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.94s)
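
The assertion compares the profile's Status field in the JSON above ("Unknown" where "Degraded" was expected). To pull just that field out of the same command's output, a one-liner sketch (assumes jq is installed; profile name taken from this run):

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | select(.Name == "ha-380000") | .Status'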

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (87.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-380000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.095728834s)

                                                
                                                
-- stdout --
	* Starting "ha-380000-m02" control-plane node in "ha-380000" cluster
	* Restarting existing qemu2 VM for "ha-380000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-380000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:42:44.950403    3183 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:42:44.950690    3183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:42:44.950697    3183 out.go:358] Setting ErrFile to fd 2...
	I0926 17:42:44.950700    3183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:42:44.950851    3183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:42:44.951141    3183 mustload.go:65] Loading cluster: ha-380000
	I0926 17:42:44.951410    3183 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0926 17:42:44.951667    3183 host.go:58] "ha-380000-m02" host status: Stopped
	I0926 17:42:44.955882    3183 out.go:177] * Starting "ha-380000-m02" control-plane node in "ha-380000" cluster
	I0926 17:42:44.958805    3183 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:42:44.958819    3183 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:42:44.958827    3183 cache.go:56] Caching tarball of preloaded images
	I0926 17:42:44.958899    3183 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:42:44.958906    3183 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:42:44.958965    3183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/ha-380000/config.json ...
	I0926 17:42:44.959447    3183 start.go:360] acquireMachinesLock for ha-380000-m02: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:42:44.959505    3183 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "ha-380000-m02"
	I0926 17:42:44.959512    3183 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:42:44.959518    3183 fix.go:54] fixHost starting: m02
	I0926 17:42:44.959619    3183 fix.go:112] recreateIfNeeded on ha-380000-m02: state=Stopped err=<nil>
	W0926 17:42:44.959624    3183 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:42:44.963889    3183 out.go:177] * Restarting existing qemu2 VM for "ha-380000-m02" ...
	I0926 17:42:44.966838    3183 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:42:44.966883    3183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:47:a7:aa:df:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/disk.qcow2
	I0926 17:42:44.969232    3183 main.go:141] libmachine: STDOUT: 
	I0926 17:42:44.969247    3183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:42:44.969275    3183 fix.go:56] duration metric: took 9.756875ms for fixHost
	I0926 17:42:44.969280    3183 start.go:83] releasing machines lock for "ha-380000-m02", held for 9.769292ms
	W0926 17:42:44.969286    3183 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:42:44.969309    3183 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:42:44.969314    3183 start.go:729] Will try again in 5 seconds ...
	I0926 17:42:49.971228    3183 start.go:360] acquireMachinesLock for ha-380000-m02: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:42:49.971343    3183 start.go:364] duration metric: took 93µs to acquireMachinesLock for "ha-380000-m02"
	I0926 17:42:49.971383    3183 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:42:49.971387    3183 fix.go:54] fixHost starting: m02
	I0926 17:42:49.971532    3183 fix.go:112] recreateIfNeeded on ha-380000-m02: state=Stopped err=<nil>
	W0926 17:42:49.971537    3183 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:42:49.975466    3183 out.go:177] * Restarting existing qemu2 VM for "ha-380000-m02" ...
	I0926 17:42:49.979506    3183 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:42:49.979545    3183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:47:a7:aa:df:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/disk.qcow2
	I0926 17:42:49.981653    3183 main.go:141] libmachine: STDOUT: 
	I0926 17:42:49.981671    3183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:42:49.981690    3183 fix.go:56] duration metric: took 10.302917ms for fixHost
	I0926 17:42:49.981694    3183 start.go:83] releasing machines lock for "ha-380000-m02", held for 10.347166ms
	W0926 17:42:49.981739    3183 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:42:49.985485    3183 out.go:201] 
	W0926 17:42:49.989483    3183 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:42:49.989488    3183 out.go:270] * 
	* 
	W0926 17:42:49.991234    3183 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:42:49.995352    3183 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:422: I0926 17:42:44.950403    3183 out.go:345] Setting OutFile to fd 1 ...
I0926 17:42:44.950690    3183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:42:44.950697    3183 out.go:358] Setting ErrFile to fd 2...
I0926 17:42:44.950700    3183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:42:44.950851    3183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:42:44.951141    3183 mustload.go:65] Loading cluster: ha-380000
I0926 17:42:44.951410    3183 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0926 17:42:44.951667    3183 host.go:58] "ha-380000-m02" host status: Stopped
I0926 17:42:44.955882    3183 out.go:177] * Starting "ha-380000-m02" control-plane node in "ha-380000" cluster
I0926 17:42:44.958805    3183 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0926 17:42:44.958819    3183 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0926 17:42:44.958827    3183 cache.go:56] Caching tarball of preloaded images
I0926 17:42:44.958899    3183 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0926 17:42:44.958906    3183 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0926 17:42:44.958965    3183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/ha-380000/config.json ...
I0926 17:42:44.959447    3183 start.go:360] acquireMachinesLock for ha-380000-m02: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0926 17:42:44.959505    3183 start.go:364] duration metric: took 26.792µs to acquireMachinesLock for "ha-380000-m02"
I0926 17:42:44.959512    3183 start.go:96] Skipping create...Using existing machine configuration
I0926 17:42:44.959518    3183 fix.go:54] fixHost starting: m02
I0926 17:42:44.959619    3183 fix.go:112] recreateIfNeeded on ha-380000-m02: state=Stopped err=<nil>
W0926 17:42:44.959624    3183 fix.go:138] unexpected machine state, will restart: <nil>
I0926 17:42:44.963889    3183 out.go:177] * Restarting existing qemu2 VM for "ha-380000-m02" ...
I0926 17:42:44.966838    3183 qemu.go:418] Using hvf for hardware acceleration
I0926 17:42:44.966883    3183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:47:a7:aa:df:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/disk.qcow2
I0926 17:42:44.969232    3183 main.go:141] libmachine: STDOUT: 
I0926 17:42:44.969247    3183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0926 17:42:44.969275    3183 fix.go:56] duration metric: took 9.756875ms for fixHost
I0926 17:42:44.969280    3183 start.go:83] releasing machines lock for "ha-380000-m02", held for 9.769292ms
W0926 17:42:44.969286    3183 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0926 17:42:44.969309    3183 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0926 17:42:44.969314    3183 start.go:729] Will try again in 5 seconds ...
I0926 17:42:49.971228    3183 start.go:360] acquireMachinesLock for ha-380000-m02: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0926 17:42:49.971343    3183 start.go:364] duration metric: took 93µs to acquireMachinesLock for "ha-380000-m02"
I0926 17:42:49.971383    3183 start.go:96] Skipping create...Using existing machine configuration
I0926 17:42:49.971387    3183 fix.go:54] fixHost starting: m02
I0926 17:42:49.971532    3183 fix.go:112] recreateIfNeeded on ha-380000-m02: state=Stopped err=<nil>
W0926 17:42:49.971537    3183 fix.go:138] unexpected machine state, will restart: <nil>
I0926 17:42:49.975466    3183 out.go:177] * Restarting existing qemu2 VM for "ha-380000-m02" ...
I0926 17:42:49.979506    3183 qemu.go:418] Using hvf for hardware acceleration
I0926 17:42:49.979545    3183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:47:a7:aa:df:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000-m02/disk.qcow2
I0926 17:42:49.981653    3183 main.go:141] libmachine: STDOUT: 
I0926 17:42:49.981671    3183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0926 17:42:49.981690    3183 fix.go:56] duration metric: took 10.302917ms for fixHost
I0926 17:42:49.981694    3183 start.go:83] releasing machines lock for "ha-380000-m02", held for 10.347166ms
W0926 17:42:49.981739    3183 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0926 17:42:49.985485    3183 out.go:201] 
W0926 17:42:49.989483    3183 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0926 17:42:49.989488    3183 out.go:270] * 
* 
W0926 17:42:49.991234    3183 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0926 17:42:49.995352    3183 out.go:201] 

                                                
                                                
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-380000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr: (25.960854667s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
E0926 17:43:32.159706    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.036527083s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

                                                
                                                
** /stderr **
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 3 (25.957660167s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0926 17:44:11.952210    3207 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0926 17:44:11.952219    3207 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (87.05s)
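
Every restart in this group fails on the same line, Failed to connect to "/var/run/socket_vmnet": Connection refused, raised by the socket_vmnet_client invocation logged above. That points at the socket_vmnet daemon on the macOS host rather than at the VM. A hedged host-side check (the binary path and socket path are the ones logged; the Homebrew service name is an assumption about how socket_vmnet was installed):

	# is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if socket_vmnet was installed via Homebrew, restarting its service is one recovery path
	sudo brew services restart socket_vmnet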

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-380000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-380000 -v=7 --alsologtostderr
E0926 17:45:48.277576    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:46:15.997997    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:46:35.072786    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:47:58.171900    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-380000 -v=7 --alsologtostderr: (3m49.001603125s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-380000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-380000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219884417s)

                                                
                                                
-- stdout --
	* [ha-380000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-380000" primary control-plane node in "ha-380000" cluster
	* Restarting existing qemu2 VM for "ha-380000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-380000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:48:03.051603    3247 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:48:03.051767    3247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:03.051771    3247 out.go:358] Setting ErrFile to fd 2...
	I0926 17:48:03.051774    3247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:03.051943    3247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:48:03.053271    3247 out.go:352] Setting JSON to false
	I0926 17:48:03.073928    3247 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2846,"bootTime":1727395237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:48:03.074002    3247 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:48:03.078285    3247 out.go:177] * [ha-380000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:48:03.085254    3247 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:48:03.085288    3247 notify.go:220] Checking for updates...
	I0926 17:48:03.091202    3247 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:48:03.094215    3247 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:48:03.097242    3247 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:48:03.100208    3247 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:48:03.103196    3247 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:48:03.106610    3247 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:48:03.106661    3247 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:48:03.111100    3247 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:48:03.118234    3247 start.go:297] selected driver: qemu2
	I0926 17:48:03.118243    3247 start.go:901] validating driver "qemu2" against &{Name:ha-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-380000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:48:03.118324    3247 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:48:03.120974    3247 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:48:03.120994    3247 cni.go:84] Creating CNI manager for ""
	I0926 17:48:03.121020    3247 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:48:03.121071    3247 start.go:340] cluster config:
	{Name:ha-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-380000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:48:03.125245    3247 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:48:03.133259    3247 out.go:177] * Starting "ha-380000" primary control-plane node in "ha-380000" cluster
	I0926 17:48:03.136198    3247 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:48:03.136216    3247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:48:03.136228    3247 cache.go:56] Caching tarball of preloaded images
	I0926 17:48:03.136291    3247 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:48:03.136297    3247 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:48:03.136370    3247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/ha-380000/config.json ...
	I0926 17:48:03.136811    3247 start.go:360] acquireMachinesLock for ha-380000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:48:03.136845    3247 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "ha-380000"
	I0926 17:48:03.136854    3247 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:48:03.136859    3247 fix.go:54] fixHost starting: 
	I0926 17:48:03.136979    3247 fix.go:112] recreateIfNeeded on ha-380000: state=Stopped err=<nil>
	W0926 17:48:03.136988    3247 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:48:03.140263    3247 out.go:177] * Restarting existing qemu2 VM for "ha-380000" ...
	I0926 17:48:03.147260    3247 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:48:03.147305    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:6c:62:79:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/disk.qcow2
	I0926 17:48:03.149296    3247 main.go:141] libmachine: STDOUT: 
	I0926 17:48:03.149319    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:48:03.149351    3247 fix.go:56] duration metric: took 12.483166ms for fixHost
	I0926 17:48:03.149357    3247 start.go:83] releasing machines lock for "ha-380000", held for 12.499292ms
	W0926 17:48:03.149363    3247 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:48:03.149396    3247 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:48:03.149401    3247 start.go:729] Will try again in 5 seconds ...
	I0926 17:48:08.154145    3247 start.go:360] acquireMachinesLock for ha-380000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:48:08.154546    3247 start.go:364] duration metric: took 301.417µs to acquireMachinesLock for "ha-380000"
	I0926 17:48:08.154665    3247 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:48:08.154687    3247 fix.go:54] fixHost starting: 
	I0926 17:48:08.155355    3247 fix.go:112] recreateIfNeeded on ha-380000: state=Stopped err=<nil>
	W0926 17:48:08.155378    3247 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:48:08.159619    3247 out.go:177] * Restarting existing qemu2 VM for "ha-380000" ...
	I0926 17:48:08.167740    3247 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:48:08.168001    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:6c:62:79:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/disk.qcow2
	I0926 17:48:08.176752    3247 main.go:141] libmachine: STDOUT: 
	I0926 17:48:08.176830    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:48:08.176890    3247 fix.go:56] duration metric: took 22.1955ms for fixHost
	I0926 17:48:08.176910    3247 start.go:83] releasing machines lock for "ha-380000", held for 22.336875ms
	W0926 17:48:08.177048    3247 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:48:08.184678    3247 out.go:201] 
	W0926 17:48:08.188761    3247 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:48:08.188794    3247 out.go:270] * 
	* 
	W0926 17:48:08.191737    3247 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:48:08.203735    3247 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-380000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-380000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (33.15375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.36s)
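Every attempt in this block dies the same way: the qemu2 driver launches the VM through socket_vmnet_client, which dials the unix socket /var/run/socket_vmnet, and nothing answers on the other side ("Connection refused"), so QEMU never receives its network file descriptor. A minimal triage sketch for the CI host, assuming the /opt/socket_vmnet install visible in the command lines above and the launchd label used by upstream socket_vmnet docs (both assumptions about this host, not taken from this log):

	# is anything serving the unix socket the driver dials?
	sudo lsof -U | grep /var/run/socket_vmnet
	# if socket_vmnet is managed by launchd, check it and kick it:
	sudo launchctl list | grep socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

Until that daemon answers, every restart below (including both in-process retries per start) fails identically.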

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-380000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.156208ms)

-- stdout --
	* The control-plane node ha-380000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-380000"

-- /stdout --
** stderr ** 
	I0926 17:48:08.345309    3260 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:48:08.345534    3260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:08.345538    3260 out.go:358] Setting ErrFile to fd 2...
	I0926 17:48:08.345540    3260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:08.345685    3260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:48:08.345910    3260 mustload.go:65] Loading cluster: ha-380000
	I0926 17:48:08.346160    3260 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0926 17:48:08.346456    3260 out.go:270] ! The control-plane node ha-380000 host is not running (will try others): state=Stopped
	! The control-plane node ha-380000 host is not running (will try others): state=Stopped
	W0926 17:48:08.346560    3260 out.go:270] ! The control-plane node ha-380000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-380000-m02 host is not running (will try others): state=Stopped
	I0926 17:48:08.351374    3260 out.go:177] * The control-plane node ha-380000-m03 host is not running: state=Stopped
	I0926 17:48:08.352530    3260 out.go:177]   To start a cluster, run: "minikube start -p ha-380000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-380000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr: exit status 7 (30.551333ms)

-- stdout --
	ha-380000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:48:08.384587    3262 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:48:08.384746    3262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:08.384749    3262 out.go:358] Setting ErrFile to fd 2...
	I0926 17:48:08.384752    3262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:48:08.384900    3262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:48:08.385025    3262 out.go:352] Setting JSON to false
	I0926 17:48:08.385035    3262 mustload.go:65] Loading cluster: ha-380000
	I0926 17:48:08.385107    3262 notify.go:220] Checking for updates...
	I0926 17:48:08.385286    3262 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:48:08.385293    3262 status.go:174] checking status of ha-380000 ...
	I0926 17:48:08.385534    3262 status.go:364] ha-380000 host status = "Stopped" (err=<nil>)
	I0926 17:48:08.385538    3262 status.go:377] host is not running, skipping remaining checks
	I0926 17:48:08.385540    3262 status.go:176] ha-380000 status: &{Name:ha-380000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:48:08.385550    3262 status.go:174] checking status of ha-380000-m02 ...
	I0926 17:48:08.385652    3262 status.go:364] ha-380000-m02 host status = "Stopped" (err=<nil>)
	I0926 17:48:08.385655    3262 status.go:377] host is not running, skipping remaining checks
	I0926 17:48:08.385657    3262 status.go:176] ha-380000-m02 status: &{Name:ha-380000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:48:08.385660    3262 status.go:174] checking status of ha-380000-m03 ...
	I0926 17:48:08.385752    3262 status.go:364] ha-380000-m03 host status = "Stopped" (err=<nil>)
	I0926 17:48:08.385755    3262 status.go:377] host is not running, skipping remaining checks
	I0926 17:48:08.385757    3262 status.go:176] ha-380000-m03 status: &{Name:ha-380000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:48:08.385760    3262 status.go:174] checking status of ha-380000-m04 ...
	I0926 17:48:08.385854    3262 status.go:364] ha-380000-m04 host status = "Stopped" (err=<nil>)
	I0926 17:48:08.385857    3262 status.go:377] host is not running, skipping remaining checks
	I0926 17:48:08.385858    3262 status.go:176] ha-380000-m04 status: &{Name:ha-380000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (30.055292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
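Note the exit status here is 83, not 80: mustload.go finds every control-plane host already stopped before any driver work starts, so the command bails out with advice instead of attempting the delete. A quick manual reproduction of that guard (same binary and profile as the test, sketched for the shell):

	out/minikube-darwin-arm64 -p ha-380000 node delete m03; echo "exit status: $?"
	# prints the "host is not running: state=Stopped" advice and exits 83
	# without ever invoking the qemu2 driver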

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-380000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-380000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-380000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-380000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (29.227125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
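The assertion reads the Status field for this profile out of the `profile list --output json` payload and wants "Degraded" (cluster up with some control planes lost); it sees "Starting" because the failed restart above never got past provisioning. A hand-run equivalent of the check, sketched with jq for readability (the test itself unmarshals the JSON in Go; jq here is illustrative):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-380000") | .Status'
	# expected by the test: Degraded    actual in this run: Starting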

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 stop -v=7 --alsologtostderr
E0926 17:50:48.286004    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-380000 stop -v=7 --alsologtostderr: (3m21.977003833s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr: exit status 7 (65.554792ms)

-- stdout --
	ha-380000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-380000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:51:30.534892    3300 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:51:30.535096    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:30.535101    3300 out.go:358] Setting ErrFile to fd 2...
	I0926 17:51:30.535104    3300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:30.535262    3300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:51:30.535417    3300 out.go:352] Setting JSON to false
	I0926 17:51:30.535430    3300 mustload.go:65] Loading cluster: ha-380000
	I0926 17:51:30.535469    3300 notify.go:220] Checking for updates...
	I0926 17:51:30.535735    3300 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:51:30.535745    3300 status.go:174] checking status of ha-380000 ...
	I0926 17:51:30.536048    3300 status.go:364] ha-380000 host status = "Stopped" (err=<nil>)
	I0926 17:51:30.536053    3300 status.go:377] host is not running, skipping remaining checks
	I0926 17:51:30.536055    3300 status.go:176] ha-380000 status: &{Name:ha-380000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:51:30.536069    3300 status.go:174] checking status of ha-380000-m02 ...
	I0926 17:51:30.536201    3300 status.go:364] ha-380000-m02 host status = "Stopped" (err=<nil>)
	I0926 17:51:30.536206    3300 status.go:377] host is not running, skipping remaining checks
	I0926 17:51:30.536209    3300 status.go:176] ha-380000-m02 status: &{Name:ha-380000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:51:30.536214    3300 status.go:174] checking status of ha-380000-m03 ...
	I0926 17:51:30.536342    3300 status.go:364] ha-380000-m03 host status = "Stopped" (err=<nil>)
	I0926 17:51:30.536346    3300 status.go:377] host is not running, skipping remaining checks
	I0926 17:51:30.536348    3300 status.go:176] ha-380000-m03 status: &{Name:ha-380000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0926 17:51:30.536353    3300 status.go:174] checking status of ha-380000-m04 ...
	I0926 17:51:30.536476    3300 status.go:364] ha-380000-m04 host status = "Stopped" (err=<nil>)
	I0926 17:51:30.536480    3300 status.go:377] host is not running, skipping remaining checks
	I0926 17:51:30.536482    3300 status.go:176] ha-380000-m04 status: &{Name:ha-380000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": ha-380000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": ha-380000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr": ha-380000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-380000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (33.068458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
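The stop itself completes (3m21s, largely per-node shutdown timeouts against VMs that never booted); the failure comes from the follow-up count assertions. Because the earlier DeleteSecondaryNode step never removed m03, status still reports three control planes where the test expects two, and four stopped kubelets where it expects three. The per-node probe the harness runs afterwards can be reproduced directly (template flag copied from the post-mortem commands above; the node flag is a sketch):

	out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000-m02
	# prints "Stopped" and exits 7 for each of the four nodes in this run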

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-380000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
E0926 17:51:35.083251    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-380000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.187798583s)

-- stdout --
	* [ha-380000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-380000" primary control-plane node in "ha-380000" cluster
	* Restarting existing qemu2 VM for "ha-380000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-380000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:51:30.598573    3304 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:51:30.598702    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:30.598706    3304 out.go:358] Setting ErrFile to fd 2...
	I0926 17:51:30.598708    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:30.598842    3304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:51:30.599872    3304 out.go:352] Setting JSON to false
	I0926 17:51:30.615954    3304 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3053,"bootTime":1727395237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:51:30.616023    3304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:51:30.621350    3304 out.go:177] * [ha-380000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:51:30.628503    3304 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:51:30.628565    3304 notify.go:220] Checking for updates...
	I0926 17:51:30.636409    3304 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:51:30.640508    3304 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:51:30.643459    3304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:51:30.646452    3304 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:51:30.649479    3304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:51:30.652682    3304 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:51:30.652956    3304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:51:30.657420    3304 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:51:30.664577    3304 start.go:297] selected driver: qemu2
	I0926 17:51:30.664583    3304 start.go:901] validating driver "qemu2" against &{Name:ha-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-380000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:51:30.664669    3304 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:51:30.666990    3304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:51:30.667017    3304 cni.go:84] Creating CNI manager for ""
	I0926 17:51:30.667039    3304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0926 17:51:30.667087    3304 start.go:340] cluster config:
	{Name:ha-380000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-380000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:51:30.670796    3304 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:51:30.679287    3304 out.go:177] * Starting "ha-380000" primary control-plane node in "ha-380000" cluster
	I0926 17:51:30.683435    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:51:30.683451    3304 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:51:30.683461    3304 cache.go:56] Caching tarball of preloaded images
	I0926 17:51:30.683526    3304 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:51:30.683533    3304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:51:30.683611    3304 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/ha-380000/config.json ...
	I0926 17:51:30.684057    3304 start.go:360] acquireMachinesLock for ha-380000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:51:30.684099    3304 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "ha-380000"
	I0926 17:51:30.684109    3304 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:51:30.684115    3304 fix.go:54] fixHost starting: 
	I0926 17:51:30.684234    3304 fix.go:112] recreateIfNeeded on ha-380000: state=Stopped err=<nil>
	W0926 17:51:30.684242    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:51:30.686187    3304 out.go:177] * Restarting existing qemu2 VM for "ha-380000" ...
	I0926 17:51:30.694458    3304 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:51:30.694500    3304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:6c:62:79:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/disk.qcow2
	I0926 17:51:30.696417    3304 main.go:141] libmachine: STDOUT: 
	I0926 17:51:30.696436    3304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:51:30.696468    3304 fix.go:56] duration metric: took 12.3535ms for fixHost
	I0926 17:51:30.696472    3304 start.go:83] releasing machines lock for "ha-380000", held for 12.3685ms
	W0926 17:51:30.696479    3304 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:51:30.696528    3304 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:51:30.696533    3304 start.go:729] Will try again in 5 seconds ...
	I0926 17:51:35.698697    3304 start.go:360] acquireMachinesLock for ha-380000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:51:35.699088    3304 start.go:364] duration metric: took 317.583µs to acquireMachinesLock for "ha-380000"
	I0926 17:51:35.699204    3304 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:51:35.699224    3304 fix.go:54] fixHost starting: 
	I0926 17:51:35.699887    3304 fix.go:112] recreateIfNeeded on ha-380000: state=Stopped err=<nil>
	W0926 17:51:35.699910    3304 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:51:35.707409    3304 out.go:177] * Restarting existing qemu2 VM for "ha-380000" ...
	I0926 17:51:35.711406    3304 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:51:35.711640    3304 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:6c:62:79:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/ha-380000/disk.qcow2
	I0926 17:51:35.720540    3304 main.go:141] libmachine: STDOUT: 
	I0926 17:51:35.720630    3304 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:51:35.720717    3304 fix.go:56] duration metric: took 21.489875ms for fixHost
	I0926 17:51:35.720737    3304 start.go:83] releasing machines lock for "ha-380000", held for 21.628208ms
	W0926 17:51:35.720926    3304 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-380000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:51:35.727442    3304 out.go:201] 
	W0926 17:51:35.731477    3304 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:51:35.731500    3304 out.go:270] * 
	* 
	W0926 17:51:35.733960    3304 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:51:35.750044    3304 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-380000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (66.056458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
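
Every failure in this group reduces to the same first step: the qemu2 driver cannot reach the socket_vmnet daemon, so no VM is ever launched. A minimal probe (a sketch, not part of the test suite; the only input it assumes is the socket path quoted in the logs) reproduces the refusal on a host where the daemon is down:

	// probe_socket_vmnet.go - sketch: dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A stopped daemon yields "connect: connection refused", matching this report.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}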

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-380000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-380000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-380000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-380000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (29.075334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
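
The assertion at ha_test.go:413 inspects only the "Status" field of the profile JSON quoted above; the rest of the blob is incidental. A sketch of that check (assuming just the "valid"/"Name"/"Status" shape visible in the quoted output, and that it runs from the test workspace where the binary path is valid):

	// profile_status.go - sketch: decode `minikube profile list --output json`
	// and print each profile's status, the field ha_test.go:413 asserts on.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test expects "Degraded" here; the stopped cluster reports "Starting".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}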

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-380000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-380000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.492833ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-380000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-380000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:51:35.933387    3319 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:51:35.933542    3319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:35.933545    3319 out.go:358] Setting ErrFile to fd 2...
	I0926 17:51:35.933548    3319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:51:35.933975    3319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:51:35.934272    3319 mustload.go:65] Loading cluster: ha-380000
	I0926 17:51:35.934731    3319 config.go:182] Loaded profile config "ha-380000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0926 17:51:35.935030    3319 out.go:270] ! The control-plane node ha-380000 host is not running (will try others): state=Stopped
	! The control-plane node ha-380000 host is not running (will try others): state=Stopped
	W0926 17:51:35.935127    3319 out.go:270] ! The control-plane node ha-380000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-380000-m02 host is not running (will try others): state=Stopped
	I0926 17:51:35.938430    3319 out.go:177] * The control-plane node ha-380000-m03 host is not running: state=Stopped
	I0926 17:51:35.942492    3319 out.go:177]   To start a cluster, run: "minikube start -p ha-380000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-380000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-380000 -n ha-380000: exit status 7 (29.579959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-380000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestImageBuild/serial/Setup (10.04s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-415000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-415000 --driver=qemu2 : exit status 80 (9.967320625s)

                                                
                                                
-- stdout --
	* [image-415000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-415000" primary control-plane node in "image-415000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-415000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-415000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-415000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-415000 -n image-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-415000 -n image-415000: exit status 7 (66.88275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.04s)

                                                
                                    
TestJSONOutput/start/Command (9.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-992000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-992000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.776128791s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4007414-5075-4f2f-9060-cf3d979a83fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-992000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b607fdd-417f-4e62-b087-c40f7339e48a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"c9cc15ec-c7a2-4f89-9fe6-1d9479779ce2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig"}}
	{"specversion":"1.0","id":"aab411f8-48b9-4de0-8312-f8dc5f4bcab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"7cb7016a-1b7e-4317-ba35-925f30f27438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d44ebbc3-c162-4e68-9b52-3fc5f156acb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube"}}
	{"specversion":"1.0","id":"7d0dd24d-fa21-4ea4-9be4-ade599831594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a3ab072-bea5-4dc6-8cee-0588b4928c7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b446168-18ad-469e-8269-c661ffec43e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d58aee60-52cf-4781-a02a-1067931514d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-992000\" primary control-plane node in \"json-output-992000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba158361-145c-4351-b0f1-aefb7fe0f9d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"284acdd8-5b1c-4d1e-8ad2-ed1071dd7eaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-992000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"39587925-b1a6-44cc-97f5-783896c71b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"e16ba234-3cf3-4b59-96c5-850dc218987c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"56f26fdb-c7ed-49c3-a848-d25f06dd5ac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-992000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"0122c5a4-7cae-48c6-a673-a0497951adba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"c73318ae-35a3-4119-a3cc-91be437a181b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-992000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
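
The marshal failure is mechanical: json_output_test.go decodes stdout line by line as JSON cloud events, and the bare "OUTPUT:"/"ERROR:" lines leaked by the qemu driver are not JSON, which is exactly the "invalid character 'O'" above. A sketch of that per-line decode (it assumes only the transcript format shown, one event per line on stdin):

	// cloudevents_lines.go - sketch: flag which lines of a start transcript
	// fail to parse as JSON cloud events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // events above run to ~1 KB per line
		for n := 1; sc.Scan(); n++ {
			line := sc.Bytes()
			if len(line) == 0 {
				continue
			}
			var ev map[string]any
			if err := json.Unmarshal(line, &ev); err != nil {
				// e.g. `OUTPUT: ` -> invalid character 'O' looking for beginning of value
				fmt.Printf("line %d is not a cloud event: %v\n", n, err)
				continue
			}
			fmt.Printf("line %d: %v\n", n, ev["type"])
		}
	}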

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-992000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-992000 --output=json --user=testUser: exit status 83 (80.685333ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ea5ea230-a8e4-45d6-8ee9-8809c2819e3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-992000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"a9928686-b3ba-419b-9004-3e4cc37bc58e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-992000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-992000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-992000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-992000 --output=json --user=testUser: exit status 83 (44.307042ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-992000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-992000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-992000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-992000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-035000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-035000 --driver=qemu2 : exit status 80 (9.778871125s)

                                                
                                                
-- stdout --
	* [first-035000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-035000" primary control-plane node in "first-035000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-035000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-035000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-26 17:52:09.828352 -0700 PDT m=+2299.498527709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-042000 -n second-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-042000 -n second-042000: exit status 85 (79.260667ms)

                                                
                                                
-- stdout --
	* Profile "second-042000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-042000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-042000" host is not running, skipping log retrieval (state="* Profile \"second-042000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-042000\"")
helpers_test.go:175: Cleaning up "second-042000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-042000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-26 17:52:10.017827 -0700 PDT m=+2299.688008001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-035000 -n first-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-035000 -n first-035000: exit status 7 (29.218375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-035000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-035000
--- FAIL: TestMinikubeProfile (10.08s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-842000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-842000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.031508083s)

                                                
                                                
-- stdout --
	* [mount-start-1-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-842000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-842000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-842000 -n mount-start-1-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-842000 -n mount-start-1-842000: exit status 7 (67.761167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.10s)
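
For reference, the launch pattern in the "executing:" libmachine lines is identical in every block of this report: socket_vmnet_client first connects to /var/run/socket_vmnet and then runs qemu-system-aarch64 with the daemon connection passed as fd 3 (-netdev socket,id=net0,fd=3), so a dead daemon aborts the start before QEMU runs at all. An illustrative sketch of driving that pattern from Go (not minikube's own code; the argument list is trimmed to the networking-relevant flags):

	// qemu_launch.go - sketch: run qemu through socket_vmnet_client the way the
	// "executing:" log lines show, with the machine-specific flags omitted.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet", // unix socket of the socket_vmnet daemon
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-accel", "hvf", // hardware acceleration, per "Using hvf" in the log
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3", // fd 3 = the connection the client hands over
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// With no daemon listening this exits 1 after "Connection refused".
			os.Exit(1)
		}
	}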

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.892833917s)

                                                
                                                
-- stdout --
	* [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 17:52:20.439664    3461 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:52:20.439789    3461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:52:20.439792    3461 out.go:358] Setting ErrFile to fd 2...
	I0926 17:52:20.439795    3461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:52:20.439923    3461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:52:20.440925    3461 out.go:352] Setting JSON to false
	I0926 17:52:20.457062    3461 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3103,"bootTime":1727395237,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:52:20.457134    3461 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:52:20.464788    3461 out.go:177] * [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:52:20.472704    3461 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:52:20.472741    3461 notify.go:220] Checking for updates...
	I0926 17:52:20.480666    3461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:52:20.483671    3461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:52:20.486593    3461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:52:20.489602    3461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:52:20.492642    3461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:52:20.495724    3461 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:52:20.499609    3461 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:52:20.506609    3461 start.go:297] selected driver: qemu2
	I0926 17:52:20.506615    3461 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:52:20.506622    3461 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:52:20.508930    3461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:52:20.512671    3461 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:52:20.515707    3461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:52:20.515724    3461 cni.go:84] Creating CNI manager for ""
	I0926 17:52:20.515742    3461 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0926 17:52:20.515746    3461 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 17:52:20.515783    3461 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:52:20.519404    3461 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:52:20.526681    3461 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0926 17:52:20.530463    3461 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:52:20.530479    3461 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:52:20.530491    3461 cache.go:56] Caching tarball of preloaded images
	I0926 17:52:20.530560    3461 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:52:20.530566    3461 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:52:20.530803    3461 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/multinode-587000/config.json ...
	I0926 17:52:20.530815    3461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/multinode-587000/config.json: {Name:mk5b24ac2d8db276f35aa4d7ee83b8307afc25e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:52:20.531038    3461 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:52:20.531073    3461 start.go:364] duration metric: took 29µs to acquireMachinesLock for "multinode-587000"
	I0926 17:52:20.531085    3461 start.go:93] Provisioning new machine with config: &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:52:20.531113    3461 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:52:20.539501    3461 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 17:52:20.556944    3461 start.go:159] libmachine.API.Create for "multinode-587000" (driver="qemu2")
	I0926 17:52:20.556973    3461 client.go:168] LocalClient.Create starting
	I0926 17:52:20.557051    3461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:52:20.557081    3461 main.go:141] libmachine: Decoding PEM data...
	I0926 17:52:20.557091    3461 main.go:141] libmachine: Parsing certificate...
	I0926 17:52:20.557129    3461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:52:20.557157    3461 main.go:141] libmachine: Decoding PEM data...
	I0926 17:52:20.557165    3461 main.go:141] libmachine: Parsing certificate...
	I0926 17:52:20.557509    3461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:52:20.719095    3461 main.go:141] libmachine: Creating SSH key...
	I0926 17:52:20.800100    3461 main.go:141] libmachine: Creating Disk image...
	I0926 17:52:20.800108    3461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:52:20.800284    3461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:20.809480    3461 main.go:141] libmachine: STDOUT: 
	I0926 17:52:20.809494    3461 main.go:141] libmachine: STDERR: 
	I0926 17:52:20.809554    3461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2 +20000M
	I0926 17:52:20.817292    3461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:52:20.817311    3461 main.go:141] libmachine: STDERR: 
	I0926 17:52:20.817323    3461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:20.817327    3461 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:52:20.817338    3461 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:52:20.817371    3461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3e:57:0b:00:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:20.818967    3461 main.go:141] libmachine: STDOUT: 
	I0926 17:52:20.818984    3461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:52:20.819011    3461 client.go:171] duration metric: took 262.038875ms to LocalClient.Create
	I0926 17:52:22.821169    3461 start.go:128] duration metric: took 2.29009875s to createHost
	I0926 17:52:22.821239    3461 start.go:83] releasing machines lock for "multinode-587000", held for 2.290220041s
	W0926 17:52:22.821294    3461 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:52:22.841577    3461 out.go:177] * Deleting "multinode-587000" in qemu2 ...
	W0926 17:52:22.877333    3461 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:52:22.877349    3461 start.go:729] Will try again in 5 seconds ...
	I0926 17:52:27.879413    3461 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:52:27.879913    3461 start.go:364] duration metric: took 353.083µs to acquireMachinesLock for "multinode-587000"
	I0926 17:52:27.880038    3461 start.go:93] Provisioning new machine with config: &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:52:27.880376    3461 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:52:27.901017    3461 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 17:52:27.951234    3461 start.go:159] libmachine.API.Create for "multinode-587000" (driver="qemu2")
	I0926 17:52:27.951287    3461 client.go:168] LocalClient.Create starting
	I0926 17:52:27.951402    3461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:52:27.951475    3461 main.go:141] libmachine: Decoding PEM data...
	I0926 17:52:27.951492    3461 main.go:141] libmachine: Parsing certificate...
	I0926 17:52:27.951552    3461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:52:27.951596    3461 main.go:141] libmachine: Decoding PEM data...
	I0926 17:52:27.951610    3461 main.go:141] libmachine: Parsing certificate...
	I0926 17:52:27.952122    3461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:52:28.121722    3461 main.go:141] libmachine: Creating SSH key...
	I0926 17:52:28.225851    3461 main.go:141] libmachine: Creating Disk image...
	I0926 17:52:28.225856    3461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:52:28.226046    3461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:28.235301    3461 main.go:141] libmachine: STDOUT: 
	I0926 17:52:28.235326    3461 main.go:141] libmachine: STDERR: 
	I0926 17:52:28.235377    3461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2 +20000M
	I0926 17:52:28.243088    3461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:52:28.243101    3461 main.go:141] libmachine: STDERR: 
	I0926 17:52:28.243111    3461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:28.243116    3461 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:52:28.243124    3461 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:52:28.243153    3461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d3:63:16:6e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:52:28.244709    3461 main.go:141] libmachine: STDOUT: 
	I0926 17:52:28.244722    3461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:52:28.244734    3461 client.go:171] duration metric: took 293.450917ms to LocalClient.Create
	I0926 17:52:30.246854    3461 start.go:128] duration metric: took 2.366516584s to createHost
	I0926 17:52:30.246932    3461 start.go:83] releasing machines lock for "multinode-587000", held for 2.367057208s
	W0926 17:52:30.247337    3461 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:52:30.268121    3461 out.go:201] 
	W0926 17:52:30.272088    3461 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:52:30.272160    3461 out.go:270] * 
	* 
	W0926 17:52:30.274628    3461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:52:30.289961    3461 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-587000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (68.434583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)
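
The `Failed to connect to "/var/run/socket_vmnet": Connection refused` line above is the root cause of this failure (and of the other qemu2 start failures in this report): the socket_vmnet daemon is not accepting connections, so the VM's network device cannot be wired up. A minimal Go probe, illustrative only and not part of minikube or the test suite (the socket path is taken from the log above), that reproduces the check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet listens on. On this test
		// host the dial fails with "connection refused", matching the log.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}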

TestMultiNode/serial/DeployApp2Nodes (106.79s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (128.919667ms)

** stderr ** 
	error: cluster "multinode-587000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- rollout status deployment/busybox: exit status 1 (57.924167ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.289375ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:30.619774    1597 retry.go:31] will retry after 1.01475523s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.537958ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:31.742430    1597 retry.go:31] will retry after 928.657616ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.3395ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:32.779856    1597 retry.go:31] will retry after 3.269215473s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.600709ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:36.153685    1597 retry.go:31] will retry after 4.041974517s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.482292ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:40.302449    1597 retry.go:31] will retry after 2.91042793s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.444041ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:43.318615    1597 retry.go:31] will retry after 9.942121714s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.572416ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:52:53.366860    1597 retry.go:31] will retry after 14.100895928s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.115125ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:53:07.572871    1597 retry.go:31] will retry after 23.494745244s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.216375ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:53:31.172981    1597 retry.go:31] will retry after 19.937812319s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.64875ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0926 17:53:51.216416    1597 retry.go:31] will retry after 25.588585375s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.42675ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.137ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.655167ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.568708ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.733667ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.261834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (106.79s)
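
The `will retry after ...` lines above come from the test helper's retry loop (retry.go), which re-runs the kubectl query with a growing, jittered delay until its time budget is spent. A minimal sketch of that pattern, an approximation rather than the actual retry.go implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs op with a jittered, doubling delay until it succeeds
	// or the overall budget is exhausted, logging each wait like retry.go does.
	func retryUntil(budget time.Duration, op func() error) error {
		start := time.Now()
		delay := time.Second
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > budget {
				return fmt.Errorf("giving up: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		_ = retryUntil(5*time.Second, func() error {
			return errors.New(`no server found for cluster "multinode-587000"`)
		})
	}

Because the cluster never came up, every attempt returns the same error and the helper eventually gives up, which is why this sub-test burns 106 seconds without making progress.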

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-587000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.285209ms)

** stderr ** 
	error: no server found for cluster "multinode-587000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.631541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)
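
The post-mortem command used throughout this report, `status --format={{.Host}}`, evaluates a Go text/template against minikube's status value, which is why its output is the bare word "Stopped". A minimal sketch (the one-field struct here is a stand-in for minikube's real status type, not its actual definition):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		st := struct{ Host string }{Host: "Stopped"}
		// --format={{.Host}} selects only the Host field of the status.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	}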

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr: exit status 83 (43.79ms)

-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"

-- /stdout --
** stderr ** 
	I0926 17:54:17.279754    3551 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:17.279898    3551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.279902    3551 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:17.279904    3551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.280036    3551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:17.280276    3551 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:17.280482    3551 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:17.285413    3551 out.go:177] * The control-plane node multinode-587000 host is not running: state=Stopped
	I0926 17:54:17.290471    3551 out.go:177]   To start a cluster, run: "minikube start -p multinode-587000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-587000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (28.895709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-587000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-587000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.7065ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-587000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-587000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-587000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.118083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
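
The "unexpected end of JSON input" above is the standard encoding/json error for decoding zero bytes: kubectl printed nothing to stdout (the context does not exist), and the test then tried to unmarshal the empty output. A minimal reproduction, illustrative only:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl wrote nothing, so the test effectively decodes "".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}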

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-587000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-587000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-587000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-587000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.389208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
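
The assertion above walks the `profile list --output json` payload and counts Config.Nodes per profile; the dump shows a single control-plane node where three were expected. A trimmed-down sketch of that check (the struct keeps only the fields the assertion needs; the real config has many more, as the dump shows):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		// Abbreviated payload in the shape of the dump above.
		data := []byte(`{"valid":[{"Name":"multinode-587000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var p profileList
		if err := json.Unmarshal(data, &p); err != nil {
			panic(err)
		}
		for _, v := range p.Valid {
			fmt.Printf("%s: %d node(s)\n", v.Name, len(v.Config.Nodes)) // multinode-587000: 1 node(s)
		}
	}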

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr: exit status 7 (30.073167ms)

-- stdout --
	{"Name":"multinode-587000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0926 17:54:17.486194    3563 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:17.486344    3563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.486347    3563 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:17.486349    3563 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.486482    3563 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:17.486604    3563 out.go:352] Setting JSON to true
	I0926 17:54:17.486615    3563 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:17.486673    3563 notify.go:220] Checking for updates...
	I0926 17:54:17.486834    3563 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:17.486842    3563 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:17.487073    3563 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:17.487076    3563 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:17.487078    3563 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-587000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.835084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
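
The decode error above ("cannot unmarshal object into Go value of type []cluster.Status") arises because with only one node, minikube emits a single JSON object while the test expects a JSON array of per-node statuses. A minimal reproduction (the status struct here is a stand-in for minikube's cluster.Status, not its actual definition):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type status struct {
		Name string
		Host string
	}

	func main() {
		// One node => one object, not an array, so decoding into a slice fails.
		out := []byte(`{"Name":"multinode-587000","Host":"Stopped"}`)
		var statuses []status
		err := json.Unmarshal(out, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.status
	}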

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node stop m03: exit status 85 (46.844875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status: exit status 7 (29.055709ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (29.765291ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:17.622618    3571 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:17.622742    3571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.622745    3571 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:17.622747    3571 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.622874    3571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:17.622992    3571 out.go:352] Setting JSON to false
	I0926 17:54:17.623004    3571 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:17.623069    3571 notify.go:220] Checking for updates...
	I0926 17:54:17.623201    3571 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:17.623210    3571 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:17.623447    3571 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:17.623451    3571 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:17.623453    3571 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.806458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (52.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.85925ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0926 17:54:17.682753    3575 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:17.682970    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.682974    3575 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:17.682976    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.683096    3575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:17.683319    3575 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:17.683508    3575 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:17.688403    3575 out.go:201] 
	W0926 17:54:17.691436    3575 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0926 17:54:17.691441    3575 out.go:270] * 
	* 
	W0926 17:54:17.693126    3575 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:54:17.696312    3575 out.go:201] 

** /stderr **
multinode_test.go:284: I0926 17:54:17.682753    3575 out.go:345] Setting OutFile to fd 1 ...
I0926 17:54:17.682970    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:54:17.682974    3575 out.go:358] Setting ErrFile to fd 2...
I0926 17:54:17.682976    3575 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:54:17.683096    3575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:54:17.683319    3575 mustload.go:65] Loading cluster: multinode-587000
I0926 17:54:17.683508    3575 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:54:17.688403    3575 out.go:201] 
W0926 17:54:17.691436    3575 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0926 17:54:17.691441    3575 out.go:270] * 
* 
W0926 17:54:17.693126    3575 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0926 17:54:17.696312    3575 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (30.016958ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:17.729716    3577 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:17.729878    3577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.729881    3577 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:17.729884    3577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:17.730022    3577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:17.730146    3577 out.go:352] Setting JSON to false
	I0926 17:54:17.730157    3577 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:17.730208    3577 notify.go:220] Checking for updates...
	I0926 17:54:17.730395    3577 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:17.730403    3577 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:17.730645    3577 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:17.730649    3577 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:17.730650    3577 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:17.731491    1597 retry.go:31] will retry after 544.292717ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (72.077875ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:18.347972    3579 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:18.348167    3579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:18.348171    3579 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:18.348175    3579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:18.348370    3579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:18.348512    3579 out.go:352] Setting JSON to false
	I0926 17:54:18.348527    3579 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:18.348560    3579 notify.go:220] Checking for updates...
	I0926 17:54:18.348806    3579 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:18.348820    3579 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:18.349118    3579 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:18.349124    3579 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:18.349126    3579 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:18.350210    1597 retry.go:31] will retry after 1.979146561s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (74.598333ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:20.404094    3581 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:20.404281    3581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:20.404286    3581 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:20.404289    3581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:20.404459    3581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:20.404614    3581 out.go:352] Setting JSON to false
	I0926 17:54:20.404629    3581 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:20.404666    3581 notify.go:220] Checking for updates...
	I0926 17:54:20.404884    3581 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:20.404896    3581 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:20.405220    3581 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:20.405225    3581 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:20.405228    3581 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:20.406260    1597 retry.go:31] will retry after 2.988049138s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.988459ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:23.468365    3583 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:23.468585    3583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:23.468590    3583 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:23.468593    3583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:23.468796    3583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:23.468956    3583 out.go:352] Setting JSON to false
	I0926 17:54:23.468971    3583 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:23.469011    3583 notify.go:220] Checking for updates...
	I0926 17:54:23.469256    3583 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:23.469266    3583 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:23.469579    3583 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:23.469584    3583 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:23.469586    3583 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:23.470642    1597 retry.go:31] will retry after 4.068066734s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (74.874292ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:27.613472    3585 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:27.613696    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:27.613702    3585 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:27.613705    3585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:27.613914    3585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:27.614085    3585 out.go:352] Setting JSON to false
	I0926 17:54:27.614114    3585 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:27.614156    3585 notify.go:220] Checking for updates...
	I0926 17:54:27.614416    3585 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:27.614427    3585 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:27.614797    3585 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:27.614803    3585 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:27.614806    3585 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:27.615876    1597 retry.go:31] will retry after 6.231703954s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (72.370333ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:33.919909    3587 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:33.920118    3587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:33.920123    3587 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:33.920127    3587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:33.920325    3587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:33.920487    3587 out.go:352] Setting JSON to false
	I0926 17:54:33.920500    3587 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:33.920554    3587 notify.go:220] Checking for updates...
	I0926 17:54:33.920795    3587 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:33.920809    3587 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:33.921108    3587 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:33.921112    3587 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:33.921115    3587 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:33.922163    1597 retry.go:31] will retry after 9.529216917s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.223125ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:43.524436    3589 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:43.524632    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:43.524637    3589 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:43.524640    3589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:43.524837    3589 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:43.524997    3589 out.go:352] Setting JSON to false
	I0926 17:54:43.525012    3589 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:43.525054    3589 notify.go:220] Checking for updates...
	I0926 17:54:43.525292    3589 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:43.525307    3589 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:43.525648    3589 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:43.525652    3589 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:43.525655    3589 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:43.526718    1597 retry.go:31] will retry after 7.313864559s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (72.792875ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:54:50.913346    3594 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:54:50.913535    3594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:50.913539    3594 out.go:358] Setting ErrFile to fd 2...
	I0926 17:54:50.913542    3594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:54:50.913709    3594 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:54:50.913856    3594 out.go:352] Setting JSON to false
	I0926 17:54:50.913871    3594 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:54:50.913924    3594 notify.go:220] Checking for updates...
	I0926 17:54:50.914153    3594 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:54:50.914163    3594 status.go:174] checking status of multinode-587000 ...
	I0926 17:54:50.914491    3594 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:54:50.914496    3594 status.go:377] host is not running, skipping remaining checks
	I0926 17:54:50.914498    3594 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0926 17:54:50.915584    1597 retry.go:31] will retry after 19.129754348s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr: exit status 7 (73.396916ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:55:10.118540    3599 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:10.118726    3599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:10.118730    3599 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:10.118734    3599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:10.118902    3599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:10.119068    3599 out.go:352] Setting JSON to false
	I0926 17:55:10.119085    3599 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:55:10.119126    3599 notify.go:220] Checking for updates...
	I0926 17:55:10.119359    3599 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:10.119377    3599 status.go:174] checking status of multinode-587000 ...
	I0926 17:55:10.119684    3599 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:55:10.119689    3599 status.go:377] host is not running, skipping remaining checks
	I0926 17:55:10.119691    3599 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-587000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (32.539083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (52.50s)
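
The retry cadence above (2.99 s, 4.07 s, 6.23 s, 9.53 s, 7.31 s, 19.13 s) is minikube's retry helper backing off between status probes until the test's budget is spent. A minimal Go sketch of that shape, assuming a hypothetical check function and made-up growth/jitter constants (the log shows the delays, not retry.go's actual parameters):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff re-runs check until it succeeds or the attempt
    // budget is spent, sleeping a jittered, growing interval between
    // tries. A sketch of the behavior logged above, not retry.go itself.
    func retryWithBackoff(check func() error, attempts int, base time.Duration) error {
        delay := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = check(); err == nil {
                return nil
            }
            // Jitter is why the logged waits are not strictly increasing.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return err
    }

    func main() {
        _ = retryWithBackoff(func() error {
            return fmt.Errorf("exit status 7") // stand-in for the failing status call
        }, 4, 2*time.Second)
    }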

TestMultiNode/serial/RestartKeepsNodes (8.68s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-587000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-587000: (3.318850667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22526575s)

-- stdout --
	* [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:55:13.565265    3623 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:13.565416    3623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:13.565420    3623 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:13.565424    3623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:13.565619    3623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:13.566859    3623 out.go:352] Setting JSON to false
	I0926 17:55:13.586515    3623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3276,"bootTime":1727395237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:55:13.586584    3623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:55:13.591883    3623 out.go:177] * [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:55:13.598781    3623 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:55:13.598818    3623 notify.go:220] Checking for updates...
	I0926 17:55:13.604775    3623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:55:13.607791    3623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:55:13.610763    3623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:55:13.613735    3623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:55:13.616832    3623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:55:13.620105    3623 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:13.620168    3623 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:55:13.624672    3623 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:55:13.631830    3623 start.go:297] selected driver: qemu2
	I0926 17:55:13.631842    3623 start.go:901] validating driver "qemu2" against &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:55:13.631908    3623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:55:13.634489    3623 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:55:13.634517    3623 cni.go:84] Creating CNI manager for ""
	I0926 17:55:13.634541    3623 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0926 17:55:13.634592    3623 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:55:13.638526    3623 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:13.645752    3623 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0926 17:55:13.649643    3623 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:55:13.649662    3623 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:55:13.649673    3623 cache.go:56] Caching tarball of preloaded images
	I0926 17:55:13.649743    3623 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:55:13.649749    3623 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:55:13.649815    3623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/multinode-587000/config.json ...
	I0926 17:55:13.650275    3623 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:13.650314    3623 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "multinode-587000"
	I0926 17:55:13.650323    3623 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:55:13.650328    3623 fix.go:54] fixHost starting: 
	I0926 17:55:13.650457    3623 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0926 17:55:13.650467    3623 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:55:13.654743    3623 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0926 17:55:13.662802    3623 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:13.662844    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d3:63:16:6e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:55:13.664922    3623 main.go:141] libmachine: STDOUT: 
	I0926 17:55:13.664941    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:13.664972    3623 fix.go:56] duration metric: took 14.643209ms for fixHost
	I0926 17:55:13.664976    3623 start.go:83] releasing machines lock for "multinode-587000", held for 14.658084ms
	W0926 17:55:13.664983    3623 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:55:13.665017    3623 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:13.665021    3623 start.go:729] Will try again in 5 seconds ...
	I0926 17:55:18.667204    3623 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:18.667635    3623 start.go:364] duration metric: took 325.958µs to acquireMachinesLock for "multinode-587000"
	I0926 17:55:18.667797    3623 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:55:18.667824    3623 fix.go:54] fixHost starting: 
	I0926 17:55:18.668562    3623 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0926 17:55:18.668588    3623 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:55:18.673060    3623 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0926 17:55:18.679977    3623 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:18.680201    3623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d3:63:16:6e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:55:18.689811    3623 main.go:141] libmachine: STDOUT: 
	I0926 17:55:18.689888    3623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:18.690031    3623 fix.go:56] duration metric: took 22.2095ms for fixHost
	I0926 17:55:18.690053    3623 start.go:83] releasing machines lock for "multinode-587000", held for 22.393333ms
	W0926 17:55:18.690262    3623 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:18.698028    3623 out.go:201] 
	W0926 17:55:18.702107    3623 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:55:18.702134    3623 out.go:270] * 
	* 
	W0926 17:55:18.704826    3623 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:55:18.712036    3623 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-587000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (32.987083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.68s)
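
Both restart attempts above die at the same step: libmachine execs QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet unix socket, which points at the socket_vmnet daemon not running on this agent. A stand-alone probe to confirm that from Go (a diagnostic sketch assuming the default socket path; not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket socket_vmnet_client needs; "connection
        // refused" here reproduces the failure captured in the log above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails on a Homebrew setup, restarting the daemon (typically via sudo brew services start socket_vmnet) is the usual first fix.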

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 node delete m03: exit status 83 (39.363292ms)

-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-587000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (29.713ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:55:18.896053    3640 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:18.896205    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:18.896212    3640 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:18.896214    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:18.896349    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:18.896463    3640 out.go:352] Setting JSON to false
	I0926 17:55:18.896478    3640 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:55:18.896543    3640 notify.go:220] Checking for updates...
	I0926 17:55:18.896677    3640 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:18.896685    3640 status.go:174] checking status of multinode-587000 ...
	I0926 17:55:18.896908    3640 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:55:18.896912    3640 status.go:377] host is not running, skipping remaining checks
	I0926 17:55:18.896914    3640 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (29.736375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
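
Each "(dbg) Run" / "Non-zero exit" pair in this report comes from a helper that shells out, captures output, and records the exit status instead of aborting on the first failure. A minimal sketch of that pattern with os/exec (runDbg is a hypothetical name, not the helpers_test.go source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runDbg executes a command and returns its combined output plus exit
    // code, mirroring the "(dbg) Run" / "Non-zero exit" bookkeeping above.
    func runDbg(name string, args ...string) (string, int, error) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if exitErr, ok := err.(*exec.ExitError); ok {
            return string(out), exitErr.ExitCode(), err
        }
        if err != nil {
            return string(out), -1, err // command failed to start at all
        }
        return string(out), 0, nil
    }

    func main() {
        out, code, _ := runDbg("false") // any command with a non-zero exit
        fmt.Printf("exit status %d\n%s", code, out)
    }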

TestMultiNode/serial/StopMultiNode (2.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-587000 stop: (2.044328666s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status: exit status 7 (66.871875ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr: exit status 7 (32.257542ms)

-- stdout --
	multinode-587000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0926 17:55:21.069849    3656 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:21.070003    3656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:21.070006    3656 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:21.070008    3656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:21.070128    3656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:21.070247    3656 out.go:352] Setting JSON to false
	I0926 17:55:21.070258    3656 mustload.go:65] Loading cluster: multinode-587000
	I0926 17:55:21.070335    3656 notify.go:220] Checking for updates...
	I0926 17:55:21.070466    3656 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:21.070474    3656 status.go:174] checking status of multinode-587000 ...
	I0926 17:55:21.070716    3656 status.go:364] multinode-587000 host status = "Stopped" (err=<nil>)
	I0926 17:55:21.070719    3656 status.go:377] host is not running, skipping remaining checks
	I0926 17:55:21.070721    3656 status.go:176] multinode-587000 status: &{Name:multinode-587000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-587000 status --alsologtostderr": multinode-587000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.084459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.17s)
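
The assertions at multinode_test.go:364 and :368 fire because the status output lists only the control-plane node: the worker never came back after the earlier restart failures, so a per-node count over the "host: Stopped" and "kubelet: Stopped" lines comes up short for a two-node cluster. A sketch of that counting check (assumed shape; the real test's matching may differ):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output as captured above: only one node is listed.
        status := "multinode-587000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        wantNodes := 2
        if got := strings.Count(status, "host: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
        }
        if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
        }
    }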

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1806775s)

-- stdout --
	* [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:55:21.129367    3660 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:21.129494    3660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:21.129498    3660 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:21.129500    3660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:21.129622    3660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:21.130695    3660 out.go:352] Setting JSON to false
	I0926 17:55:21.146888    3660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3284,"bootTime":1727395237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:55:21.146993    3660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:55:21.151868    3660 out.go:177] * [multinode-587000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:55:21.158062    3660 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:55:21.158076    3660 notify.go:220] Checking for updates...
	I0926 17:55:21.165950    3660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:55:21.168981    3660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:55:21.170420    3660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:55:21.174010    3660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:55:21.176971    3660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:55:21.180335    3660 config.go:182] Loaded profile config "multinode-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:21.180598    3660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:55:21.184865    3660 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:55:21.191983    3660 start.go:297] selected driver: qemu2
	I0926 17:55:21.191991    3660 start.go:901] validating driver "qemu2" against &{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:55:21.192064    3660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:55:21.194387    3660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:55:21.194410    3660 cni.go:84] Creating CNI manager for ""
	I0926 17:55:21.194437    3660 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0926 17:55:21.194497    3660 start.go:340] cluster config:
	{Name:multinode-587000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-587000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:55:21.198177    3660 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:21.204963    3660 out.go:177] * Starting "multinode-587000" primary control-plane node in "multinode-587000" cluster
	I0926 17:55:21.208851    3660 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:55:21.208868    3660 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:55:21.208875    3660 cache.go:56] Caching tarball of preloaded images
	I0926 17:55:21.208921    3660 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:55:21.208926    3660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 17:55:21.208988    3660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/multinode-587000/config.json ...
	I0926 17:55:21.209417    3660 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:21.209453    3660 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "multinode-587000"
	I0926 17:55:21.209462    3660 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:55:21.209468    3660 fix.go:54] fixHost starting: 
	I0926 17:55:21.209586    3660 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0926 17:55:21.209594    3660 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:55:21.216888    3660 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0926 17:55:21.220951    3660 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:21.220998    3660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d3:63:16:6e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:55:21.223094    3660 main.go:141] libmachine: STDOUT: 
	I0926 17:55:21.223116    3660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:21.223149    3660 fix.go:56] duration metric: took 13.680833ms for fixHost
	I0926 17:55:21.223153    3660 start.go:83] releasing machines lock for "multinode-587000", held for 13.695625ms
	W0926 17:55:21.223160    3660 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:55:21.223201    3660 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:21.223206    3660 start.go:729] Will try again in 5 seconds ...
	I0926 17:55:26.225341    3660 start.go:360] acquireMachinesLock for multinode-587000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:26.225789    3660 start.go:364] duration metric: took 350.459µs to acquireMachinesLock for "multinode-587000"
	I0926 17:55:26.225925    3660 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:55:26.225949    3660 fix.go:54] fixHost starting: 
	I0926 17:55:26.226647    3660 fix.go:112] recreateIfNeeded on multinode-587000: state=Stopped err=<nil>
	W0926 17:55:26.226672    3660 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:55:26.235138    3660 out.go:177] * Restarting existing qemu2 VM for "multinode-587000" ...
	I0926 17:55:26.239165    3660 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:26.239502    3660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:d3:63:16:6e:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/multinode-587000/disk.qcow2
	I0926 17:55:26.248545    3660 main.go:141] libmachine: STDOUT: 
	I0926 17:55:26.248646    3660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:26.248762    3660 fix.go:56] duration metric: took 22.816875ms for fixHost
	I0926 17:55:26.248783    3660 start.go:83] releasing machines lock for "multinode-587000", held for 22.96925ms
	W0926 17:55:26.249004    3660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:26.255171    3660 out.go:201] 
	W0926 17:55:26.259246    3660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:55:26.259268    3660 out.go:270] * 
	* 
	W0926 17:55:26.261544    3660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:55:26.269195    3660 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-587000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (66.440625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
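Note: every qemu2 start failure in this run reduces to the same precondition: the socket_vmnet daemon is not accepting connections on /var/run/socket_vmnet, so libmachine's socket_vmnet_client wrapper exits before qemu ever boots. A minimal Go sketch (not part of the test suite; the socket path is taken from the logs above) that probes the same unix socket the driver dials:

	// probe_socket_vmnet.go - hedged sketch: dial the unix socket the qemu2
	// driver depends on. If this fails with "connection refused", every
	// "minikube start --driver=qemu2" in this report fails the same way.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}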

TestMultiNode/serial/ValidateNameConflict (20.08s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-587000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000-m01 --driver=qemu2 : exit status 80 (9.87905775s)

-- stdout --
	* [multinode-587000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000-m01" primary control-plane node in "multinode-587000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 : exit status 80 (9.973404208s)

-- stdout --
	* [multinode-587000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-587000-m02" primary control-plane node in "multinode-587000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-587000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-587000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-587000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-587000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-587000: exit status 83 (81.804042ms)

-- stdout --
	* The control-plane node multinode-587000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-587000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-587000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-587000 -n multinode-587000: exit status 7 (30.028541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.08s)

TestPreload (10.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-872000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0926 17:55:48.277641    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-872000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.002350458s)

-- stdout --
	* [test-preload-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-872000" primary control-plane node in "test-preload-872000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-872000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 17:55:46.574042    3717 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:55:46.574162    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:46.574165    3717 out.go:358] Setting ErrFile to fd 2...
	I0926 17:55:46.574168    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:55:46.574298    3717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:55:46.575366    3717 out.go:352] Setting JSON to false
	I0926 17:55:46.591500    3717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3309,"bootTime":1727395237,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:55:46.591579    3717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:55:46.598015    3717 out.go:177] * [test-preload-872000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:55:46.606038    3717 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:55:46.606103    3717 notify.go:220] Checking for updates...
	I0926 17:55:46.611968    3717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:55:46.615028    3717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:55:46.618002    3717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:55:46.620980    3717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:55:46.623973    3717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:55:46.627286    3717 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:55:46.627336    3717 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:55:46.631931    3717 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 17:55:46.638019    3717 start.go:297] selected driver: qemu2
	I0926 17:55:46.638027    3717 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:55:46.638036    3717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:55:46.640493    3717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:55:46.643993    3717 out.go:177] * Automatically selected the socket_vmnet network
	I0926 17:55:46.647007    3717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 17:55:46.647028    3717 cni.go:84] Creating CNI manager for ""
	I0926 17:55:46.647053    3717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:55:46.647057    3717 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:55:46.647083    3717 start.go:340] cluster config:
	{Name:test-preload-872000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:55:46.650941    3717 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.658983    3717 out.go:177] * Starting "test-preload-872000" primary control-plane node in "test-preload-872000" cluster
	I0926 17:55:46.662998    3717 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0926 17:55:46.663058    3717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/test-preload-872000/config.json ...
	I0926 17:55:46.663072    3717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/test-preload-872000/config.json: {Name:mk393c94a51e4199a7fed7a26b2a233e0cc3ee20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:55:46.663089    3717 cache.go:107] acquiring lock: {Name:mke80ef261c2733d404098c19fbd6c48078e2c2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663085    3717 cache.go:107] acquiring lock: {Name:mka2794e14c3d83963291f7ccf8a15aef76e08bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663091    3717 cache.go:107] acquiring lock: {Name:mkcd186008d81977a68c7499aca761447438bf00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663115    3717 cache.go:107] acquiring lock: {Name:mkd78727eb01327e17486980973260f9d64e4ccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663335    3717 cache.go:107] acquiring lock: {Name:mk8a08478b8494b7cde969be4706be019c64d02b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663358    3717 cache.go:107] acquiring lock: {Name:mk5113437acf97fdb923e71daf9308c240a62bb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663376    3717 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0926 17:55:46.663356    3717 start.go:360] acquireMachinesLock for test-preload-872000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:46.663403    3717 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0926 17:55:46.663367    3717 cache.go:107] acquiring lock: {Name:mkf921f27ccb3036ec1ff9ce604f30b3adf6f0f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663388    3717 cache.go:107] acquiring lock: {Name:mk1e1a4df7e72a67d39031f17f4b7abc3f154393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:55:46.663380    3717 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:55:46.663495    3717 start.go:364] duration metric: took 81µs to acquireMachinesLock for "test-preload-872000"
	I0926 17:55:46.663505    3717 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0926 17:55:46.663579    3717 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0926 17:55:46.663628    3717 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0926 17:55:46.663638    3717 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:55:46.663549    3717 start.go:93] Provisioning new machine with config: &{Name:test-preload-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:55:46.663665    3717 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0926 17:55:46.663667    3717 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:55:46.670898    3717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 17:55:46.673906    3717 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0926 17:55:46.674903    3717 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0926 17:55:46.674921    3717 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0926 17:55:46.675117    3717 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0926 17:55:46.676652    3717 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0926 17:55:46.676652    3717 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:55:46.676701    3717 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0926 17:55:46.676703    3717 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:55:46.689428    3717 start.go:159] libmachine.API.Create for "test-preload-872000" (driver="qemu2")
	I0926 17:55:46.689448    3717 client.go:168] LocalClient.Create starting
	I0926 17:55:46.689545    3717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:55:46.689577    3717 main.go:141] libmachine: Decoding PEM data...
	I0926 17:55:46.689589    3717 main.go:141] libmachine: Parsing certificate...
	I0926 17:55:46.689635    3717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:55:46.689658    3717 main.go:141] libmachine: Decoding PEM data...
	I0926 17:55:46.689666    3717 main.go:141] libmachine: Parsing certificate...
	I0926 17:55:46.689990    3717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:55:46.853691    3717 main.go:141] libmachine: Creating SSH key...
	I0926 17:55:47.078545    3717 main.go:141] libmachine: Creating Disk image...
	I0926 17:55:47.078563    3717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:55:47.078725    3717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:47.088295    3717 main.go:141] libmachine: STDOUT: 
	I0926 17:55:47.088314    3717 main.go:141] libmachine: STDERR: 
	I0926 17:55:47.088366    3717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2 +20000M
	I0926 17:55:47.097753    3717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:55:47.097780    3717 main.go:141] libmachine: STDERR: 
	I0926 17:55:47.097963    3717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:47.097974    3717 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:55:47.097993    3717 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:47.098023    3717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1f:73:de:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:47.100138    3717 main.go:141] libmachine: STDOUT: 
	I0926 17:55:47.100151    3717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:47.100172    3717 client.go:171] duration metric: took 410.730708ms to LocalClient.Create
	I0926 17:55:47.125737    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0926 17:55:47.139013    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0926 17:55:47.141184    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0926 17:55:47.179306    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0926 17:55:47.196693    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0926 17:55:47.238463    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0926 17:55:47.254381    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0926 17:55:47.254403    3717 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 591.330833ms
	I0926 17:55:47.254440    3717 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0926 17:55:47.301477    3717 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0926 17:55:47.301525    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W0926 17:55:48.040865    3717 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0926 17:55:48.040974    3717 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0926 17:55:49.022194    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0926 17:55:49.022260    3717 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.359237667s
	I0926 17:55:49.022321    3717 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0926 17:55:49.100310    3717 start.go:128] duration metric: took 2.4366795s to createHost
	I0926 17:55:49.100349    3717 start.go:83] releasing machines lock for "test-preload-872000", held for 2.436906333s
	W0926 17:55:49.100404    3717 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:49.124720    3717 out.go:177] * Deleting "test-preload-872000" in qemu2 ...
	W0926 17:55:49.156434    3717 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:49.156464    3717 start.go:729] Will try again in 5 seconds ...
	I0926 17:55:49.451767    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0926 17:55:49.451816    3717 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.788631791s
	I0926 17:55:49.451839    3717 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0926 17:55:49.505358    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0926 17:55:49.505407    3717 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.842150209s
	I0926 17:55:49.505457    3717 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0926 17:55:51.168233    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0926 17:55:51.168276    3717 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.505060416s
	I0926 17:55:51.168323    3717 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0926 17:55:51.538887    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0926 17:55:51.538940    3717 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.8759835s
	I0926 17:55:51.538973    3717 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0926 17:55:52.024427    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0926 17:55:52.024479    3717 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.361313416s
	I0926 17:55:52.024509    3717 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0926 17:55:54.156566    3717 start.go:360] acquireMachinesLock for test-preload-872000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:55:54.156996    3717 start.go:364] duration metric: took 348.375µs to acquireMachinesLock for "test-preload-872000"
	I0926 17:55:54.157095    3717 start.go:93] Provisioning new machine with config: &{Name:test-preload-872000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-872000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 17:55:54.157383    3717 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 17:55:54.163039    3717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 17:55:54.214039    3717 start.go:159] libmachine.API.Create for "test-preload-872000" (driver="qemu2")
	I0926 17:55:54.214080    3717 client.go:168] LocalClient.Create starting
	I0926 17:55:54.214205    3717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 17:55:54.214266    3717 main.go:141] libmachine: Decoding PEM data...
	I0926 17:55:54.214281    3717 main.go:141] libmachine: Parsing certificate...
	I0926 17:55:54.214354    3717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 17:55:54.214405    3717 main.go:141] libmachine: Decoding PEM data...
	I0926 17:55:54.214417    3717 main.go:141] libmachine: Parsing certificate...
	I0926 17:55:54.214922    3717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 17:55:54.384165    3717 main.go:141] libmachine: Creating SSH key...
	I0926 17:55:54.468968    3717 main.go:141] libmachine: Creating Disk image...
	I0926 17:55:54.468974    3717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 17:55:54.469122    3717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:54.478385    3717 main.go:141] libmachine: STDOUT: 
	I0926 17:55:54.478400    3717 main.go:141] libmachine: STDERR: 
	I0926 17:55:54.478463    3717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2 +20000M
	I0926 17:55:54.486615    3717 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 17:55:54.486630    3717 main.go:141] libmachine: STDERR: 
	I0926 17:55:54.486642    3717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:54.486657    3717 main.go:141] libmachine: Starting QEMU VM...
	I0926 17:55:54.486663    3717 qemu.go:418] Using hvf for hardware acceleration
	I0926 17:55:54.486692    3717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:18:d5:ee:de:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/test-preload-872000/disk.qcow2
	I0926 17:55:54.488493    3717 main.go:141] libmachine: STDOUT: 
	I0926 17:55:54.488508    3717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 17:55:54.488521    3717 client.go:171] duration metric: took 274.442916ms to LocalClient.Create
	I0926 17:55:56.224982    3717 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0926 17:55:56.225048    3717 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.562194708s
	I0926 17:55:56.225075    3717 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0926 17:55:56.225122    3717 cache.go:87] Successfully saved all images to host disk.
	I0926 17:55:56.490678    3717 start.go:128] duration metric: took 2.333338083s to createHost
	I0926 17:55:56.490728    3717 start.go:83] releasing machines lock for "test-preload-872000", held for 2.333771792s
	W0926 17:55:56.491052    3717 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-872000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 17:55:56.509449    3717 out.go:201] 
	W0926 17:55:56.513683    3717 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 17:55:56.513707    3717 out.go:270] * 
	* 
	W0926 17:55:56.516077    3717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:55:56.533554    3717 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-872000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-26 17:55:56.551781 -0700 PDT m=+2526.228289667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-872000 -n test-preload-872000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-872000 -n test-preload-872000: exit status 7 (65.070792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-872000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-872000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-872000
--- FAIL: TestPreload (10.15s)
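Note: the post-mortem block above is the harness's standard pattern: run "minikube status --format={{.Host}}" for the profile and interpret the exit code (7 = the host exists but is stopped, which helpers_test.go treats as "may be ok"; 80 = GUEST_PROVISION failure during start). A hedged Go sketch of that check (profile name taken from the log above; not the helpers' actual implementation):

	// status_check.go - sketch: run the status command and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "test-preload-872000", "-n", "test-preload-872000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // e.g. "Stopped"
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit code: %d\n", exitErr.ExitCode()) // 7 => stopped host
		}
	}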

TestScheduledStopUnix (10.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-774000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-774000 --memory=2048 --driver=qemu2 : exit status 80 (10.13949225s)

-- stdout --
	* [scheduled-stop-774000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-774000" primary control-plane node in "scheduled-stop-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-774000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-774000" primary control-plane node in "scheduled-stop-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-26 17:56:06.837938 -0700 PDT m=+2536.514733626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-774000 -n scheduled-stop-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-774000 -n scheduled-stop-774000: exit status 7 (66.400375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-774000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-774000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-774000
--- FAIL: TestScheduledStopUnix (10.29s)
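Note: each failing start above shows the same two-attempt flow: create the host; on failure print "StartHost failed, but will try again", delete the profile, wait 5 seconds ("Will try again in 5 seconds ..."), and retry once before exiting with status 80. A simplified Go illustration of that control flow (a sketch of what the logs show, not minikube source):

	// retry_sketch.go - sketch of the retry-once-after-delete pattern.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; here it always fails
	// the way these logs do when the vmnet daemon is unreachable.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(profile string) error {
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			fmt.Printf("* Deleting %q ...\n", profile)
			time.Sleep(5 * time.Second)
			return createHost(profile)
		}
		return nil
	}

	func main() {
		if err := startWithRetry("scheduled-stop-774000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}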

TestSkaffold (12.68s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe4166121663 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe4166121663 version: (1.057322458s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-279000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-279000 --memory=2600 --driver=qemu2 : exit status 80 (9.878639292s)

-- stdout --
	* [skaffold-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-279000" primary control-plane node in "skaffold-279000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-279000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-279000" primary control-plane node in "skaffold-279000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-26 17:56:19.51495 -0700 PDT m=+2549.192100001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-279000 -n skaffold-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-279000 -n skaffold-279000: exit status 7 (63.531167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-279000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-279000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-279000
--- FAIL: TestSkaffold (12.68s)
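
TestSkaffold never reaches Skaffold itself: both VM creation attempts die dialing /var/run/socket_vmnet, the same "Connection refused" that sinks most of the qemu2 starts in this run. A minimal spot-check of the socket_vmnet daemon on the build host, sketched under the assumption that it was installed through Homebrew (the service label, paths, and profile name below are assumptions; the CI machine may differ), would be:

	# Does anything own the socket the qemu2 driver dials?
	ls -l /var/run/socket_vmnet

	# Is the launchd job loaded at all?
	sudo launchctl list | grep -i socket_vmnet

	# Restart the daemon (Homebrew install assumed)
	sudo brew services restart socket_vmnet

	# Retry the failing driver/network combination with a throwaway profile (name arbitrary)
	out/minikube-darwin-arm64 start -p vmnet-smoke --driver=qemu2 --network=socket_vmnet

If the socket file exists but connections are refused, the daemon has most likely died or come back on a different socket path; bouncing it and rerunning a single profile is far cheaper than rerunning the whole suite.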

TestRunningBinaryUpgrade (596.19s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.472581440 start -p running-upgrade-937000 --memory=2200 --vm-driver=qemu2 
E0926 17:57:11.360248    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.472581440 start -p running-upgrade-937000 --memory=2200 --vm-driver=qemu2 : (56.166752875s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-937000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-937000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.667976s)

-- stdout --
	* [running-upgrade-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-937000" primary control-plane node in "running-upgrade-937000" cluster
	* Updating the running qemu2 "running-upgrade-937000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0926 17:57:59.475797    4114 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:57:59.475930    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:57:59.475933    4114 out.go:358] Setting ErrFile to fd 2...
	I0926 17:57:59.475936    4114 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:57:59.476073    4114 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:57:59.477117    4114 out.go:352] Setting JSON to false
	I0926 17:57:59.493327    4114 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3442,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:57:59.493404    4114 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:57:59.496463    4114 out.go:177] * [running-upgrade-937000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:57:59.503789    4114 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:57:59.503826    4114 notify.go:220] Checking for updates...
	I0926 17:57:59.510702    4114 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:57:59.514696    4114 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:57:59.517695    4114 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:57:59.520757    4114 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:57:59.523774    4114 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:57:59.526954    4114 config.go:182] Loaded profile config "running-upgrade-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 17:57:59.530682    4114 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0926 17:57:59.533711    4114 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:57:59.537658    4114 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:57:59.544737    4114 start.go:297] selected driver: qemu2
	I0926 17:57:59.544744    4114 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 17:57:59.544793    4114 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:57:59.547066    4114 cni.go:84] Creating CNI manager for ""
	I0926 17:57:59.547093    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:57:59.547113    4114 start.go:340] cluster config:
	{Name:running-upgrade-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 17:57:59.547166    4114 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:57:59.554712    4114 out.go:177] * Starting "running-upgrade-937000" primary control-plane node in "running-upgrade-937000" cluster
	I0926 17:57:59.558783    4114 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 17:57:59.558797    4114 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0926 17:57:59.558805    4114 cache.go:56] Caching tarball of preloaded images
	I0926 17:57:59.558853    4114 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 17:57:59.558858    4114 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0926 17:57:59.558917    4114 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/config.json ...
	I0926 17:57:59.559337    4114 start.go:360] acquireMachinesLock for running-upgrade-937000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 17:57:59.559362    4114 start.go:364] duration metric: took 20.208µs to acquireMachinesLock for "running-upgrade-937000"
	I0926 17:57:59.559369    4114 start.go:96] Skipping create...Using existing machine configuration
	I0926 17:57:59.559375    4114 fix.go:54] fixHost starting: 
	I0926 17:57:59.560025    4114 fix.go:112] recreateIfNeeded on running-upgrade-937000: state=Running err=<nil>
	W0926 17:57:59.560033    4114 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 17:57:59.568717    4114 out.go:177] * Updating the running qemu2 "running-upgrade-937000" VM ...
	I0926 17:57:59.572750    4114 machine.go:93] provisionDockerMachine start ...
	I0926 17:57:59.572794    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:57:59.572912    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:57:59.572918    4114 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 17:57:59.633033    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-937000
	
	I0926 17:57:59.633047    4114 buildroot.go:166] provisioning hostname "running-upgrade-937000"
	I0926 17:57:59.633101    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:57:59.633211    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:57:59.633217    4114 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-937000 && echo "running-upgrade-937000" | sudo tee /etc/hostname
	I0926 17:57:59.697210    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-937000
	
	I0926 17:57:59.697274    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:57:59.697408    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:57:59.697416    4114 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-937000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-937000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-937000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 17:57:59.752587    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:57:59.752598    4114 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1075/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1075/.minikube}
	I0926 17:57:59.752607    4114 buildroot.go:174] setting up certificates
	I0926 17:57:59.752614    4114 provision.go:84] configureAuth start
	I0926 17:57:59.752621    4114 provision.go:143] copyHostCerts
	I0926 17:57:59.752680    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem, removing ...
	I0926 17:57:59.752685    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem
	I0926 17:57:59.752795    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem (1078 bytes)
	I0926 17:57:59.752938    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem, removing ...
	I0926 17:57:59.752941    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem
	I0926 17:57:59.752990    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem (1123 bytes)
	I0926 17:57:59.753104    4114 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem, removing ...
	I0926 17:57:59.753107    4114 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem
	I0926 17:57:59.753149    4114 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem (1679 bytes)
	I0926 17:57:59.753233    4114 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-937000 san=[127.0.0.1 localhost minikube running-upgrade-937000]
	I0926 17:57:59.895570    4114 provision.go:177] copyRemoteCerts
	I0926 17:57:59.895625    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 17:57:59.895636    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 17:57:59.926857    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0926 17:57:59.933873    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 17:57:59.940324    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0926 17:57:59.947969    4114 provision.go:87] duration metric: took 195.350834ms to configureAuth
	I0926 17:57:59.947978    4114 buildroot.go:189] setting minikube options for container-runtime
	I0926 17:57:59.948101    4114 config.go:182] Loaded profile config "running-upgrade-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 17:57:59.948143    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:57:59.948230    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:57:59.948235    4114 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 17:58:00.006786    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 17:58:00.006796    4114 buildroot.go:70] root file system type: tmpfs
	I0926 17:58:00.006851    4114 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 17:58:00.006907    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:58:00.007016    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:58:00.007050    4114 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 17:58:00.068269    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 17:58:00.068331    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:58:00.068454    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:58:00.068466    4114 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 17:58:00.133950    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 17:58:00.133961    4114 machine.go:96] duration metric: took 561.220292ms to provisionDockerMachine
	I0926 17:58:00.133966    4114 start.go:293] postStartSetup for "running-upgrade-937000" (driver="qemu2")
	I0926 17:58:00.133972    4114 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 17:58:00.134036    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 17:58:00.134045    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 17:58:00.164276    4114 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 17:58:00.165509    4114 info.go:137] Remote host: Buildroot 2021.02.12
	I0926 17:58:00.165517    4114 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/addons for local assets ...
	I0926 17:58:00.165582    4114 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/files for local assets ...
	I0926 17:58:00.165693    4114 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem -> 15972.pem in /etc/ssl/certs
	I0926 17:58:00.165789    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 17:58:00.168248    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /etc/ssl/certs/15972.pem (1708 bytes)
	I0926 17:58:00.176991    4114 start.go:296] duration metric: took 43.019125ms for postStartSetup
	I0926 17:58:00.177010    4114 fix.go:56] duration metric: took 617.653875ms for fixHost
	I0926 17:58:00.177060    4114 main.go:141] libmachine: Using SSH client type: native
	I0926 17:58:00.177180    4114 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b7dc00] 0x104b80440 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0926 17:58:00.177187    4114 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 17:58:00.237367    4114 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398680.117824013
	
	I0926 17:58:00.237376    4114 fix.go:216] guest clock: 1727398680.117824013
	I0926 17:58:00.237379    4114 fix.go:229] Guest: 2024-09-26 17:58:00.117824013 -0700 PDT Remote: 2024-09-26 17:58:00.177011 -0700 PDT m=+0.721258167 (delta=-59.186987ms)
	I0926 17:58:00.237395    4114 fix.go:200] guest clock delta is within tolerance: -59.186987ms
	I0926 17:58:00.237402    4114 start.go:83] releasing machines lock for "running-upgrade-937000", held for 678.054833ms
	I0926 17:58:00.237462    4114 ssh_runner.go:195] Run: cat /version.json
	I0926 17:58:00.237473    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 17:58:00.237462    4114 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 17:58:00.237500    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	W0926 17:58:00.237997    4114 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50252: connect: connection refused
	I0926 17:58:00.238016    4114 retry.go:31] will retry after 328.196643ms: dial tcp [::1]:50252: connect: connection refused
	W0926 17:58:00.601501    4114 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0926 17:58:00.601571    4114 ssh_runner.go:195] Run: systemctl --version
	I0926 17:58:00.603273    4114 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 17:58:00.604993    4114 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 17:58:00.605032    4114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0926 17:58:00.608054    4114 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0926 17:58:00.612517    4114 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 17:58:00.612529    4114 start.go:495] detecting cgroup driver to use...
	I0926 17:58:00.612613    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:58:00.617986    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0926 17:58:00.620986    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 17:58:00.624303    4114 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 17:58:00.624348    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 17:58:00.627782    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:58:00.631324    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 17:58:00.634706    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 17:58:00.637690    4114 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 17:58:00.640728    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 17:58:00.644371    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 17:58:00.647999    4114 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 17:58:00.651873    4114 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 17:58:00.655218    4114 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 17:58:00.658167    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:00.755674    4114 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 17:58:00.766947    4114 start.go:495] detecting cgroup driver to use...
	I0926 17:58:00.767031    4114 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 17:58:00.775284    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:58:00.779814    4114 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 17:58:00.791499    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 17:58:00.796295    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 17:58:00.801128    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 17:58:00.806427    4114 ssh_runner.go:195] Run: which cri-dockerd
	I0926 17:58:00.807734    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 17:58:00.810842    4114 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0926 17:58:00.815700    4114 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 17:58:00.907169    4114 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 17:58:01.000337    4114 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 17:58:01.000397    4114 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 17:58:01.005816    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:01.099128    4114 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:58:04.408988    4114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.309936917s)
	I0926 17:58:04.409063    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 17:58:04.413395    4114 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 17:58:04.419796    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:58:04.426013    4114 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 17:58:04.519426    4114 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 17:58:04.600339    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:04.679715    4114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 17:58:04.686243    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 17:58:04.690541    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:04.768525    4114 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 17:58:04.808219    4114 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 17:58:04.808302    4114 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 17:58:04.810545    4114 start.go:563] Will wait 60s for crictl version
	I0926 17:58:04.810601    4114 ssh_runner.go:195] Run: which crictl
	I0926 17:58:04.812085    4114 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 17:58:04.824184    4114 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0926 17:58:04.824272    4114 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:58:04.837235    4114 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 17:58:04.854628    4114 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0926 17:58:04.854763    4114 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0926 17:58:04.856041    4114 kubeadm.go:883] updating cluster {Name:running-upgrade-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0926 17:58:04.856084    4114 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 17:58:04.856131    4114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:58:04.871283    4114 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 17:58:04.871290    4114 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0926 17:58:04.871336    4114 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:58:04.874492    4114 ssh_runner.go:195] Run: which lz4
	I0926 17:58:04.875834    4114 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 17:58:04.876924    4114 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 17:58:04.876934    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0926 17:58:05.847460    4114 docker.go:649] duration metric: took 971.691333ms to copy over tarball
	I0926 17:58:05.847521    4114 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 17:58:06.967509    4114 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.120006458s)
	I0926 17:58:06.967523    4114 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 17:58:06.983543    4114 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 17:58:06.986749    4114 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0926 17:58:06.991737    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:07.075082    4114 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 17:58:08.270545    4114 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.195480541s)
	I0926 17:58:08.270654    4114 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 17:58:08.284453    4114 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 17:58:08.284463    4114 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0926 17:58:08.284468    4114 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0926 17:58:08.289038    4114 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:58:08.291268    4114 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 17:58:08.293306    4114 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 17:58:08.293533    4114 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:58:08.295338    4114 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 17:58:08.295529    4114 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 17:58:08.296846    4114 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 17:58:08.296875    4114 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 17:58:08.298158    4114 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:58:08.298245    4114 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 17:58:08.300258    4114 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 17:58:08.300276    4114 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0926 17:58:08.301482    4114 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0926 17:58:08.301478    4114 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:58:08.302705    4114 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0926 17:58:08.303236    4114 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0926 17:58:08.757209    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0926 17:58:08.761407    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 17:58:08.776445    4114 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0926 17:58:08.776478    4114 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 17:58:08.776550    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0926 17:58:08.786851    4114 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0926 17:58:08.786870    4114 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 17:58:08.786929    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 17:58:08.786936    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	W0926 17:58:08.792194    4114 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0926 17:58:08.792327    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:58:08.794128    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0926 17:58:08.797594    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0926 17:58:08.801113    4114 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0926 17:58:08.801135    4114 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 17:58:08.801195    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0926 17:58:08.811686    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0926 17:58:08.822170    4114 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0926 17:58:08.822200    4114 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:58:08.822217    4114 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0926 17:58:08.822228    4114 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 17:58:08.822268    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0926 17:58:08.822269    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0926 17:58:08.823527    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0926 17:58:08.827175    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0926 17:58:08.836172    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0926 17:58:08.836234    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0926 17:58:08.836172    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0926 17:58:08.836301    4114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0926 17:58:08.848665    4114 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0926 17:58:08.848688    4114 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0926 17:58:08.848762    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0926 17:58:08.851063    4114 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0926 17:58:08.851084    4114 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0926 17:58:08.851106    4114 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0926 17:58:08.851122    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0926 17:58:08.851136    4114 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0926 17:58:08.887200    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0926 17:58:08.887343    4114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0926 17:58:08.887914    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0926 17:58:08.887973    4114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0926 17:58:08.900413    4114 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0926 17:58:08.900440    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0926 17:58:08.912587    4114 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0926 17:58:08.912600    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0926 17:58:08.914330    4114 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0926 17:58:08.914357    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0926 17:58:09.035426    4114 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0926 17:58:09.035441    4114 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0926 17:58:09.035448    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0926 17:58:09.112169    4114 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0926 17:58:09.184191    4114 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0926 17:58:09.184204    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0926 17:58:09.229027    4114 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0926 17:58:09.229209    4114 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:58:09.331021    4114 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0926 17:58:09.331059    4114 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0926 17:58:09.331082    4114 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:58:09.331148    4114 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 17:58:10.304791    4114 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0926 17:58:10.305086    4114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0926 17:58:10.308927    4114 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0926 17:58:10.308963    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0926 17:58:10.365849    4114 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0926 17:58:10.365863    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0926 17:58:10.601173    4114 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0926 17:58:10.601214    4114 cache_images.go:92] duration metric: took 2.316804875s to LoadCachedImages
	W0926 17:58:10.601248    4114 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0926 17:58:10.601252    4114 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0926 17:58:10.601296    4114 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-937000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 17:58:10.601365    4114 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 17:58:10.614639    4114 cni.go:84] Creating CNI manager for ""
	I0926 17:58:10.614651    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:58:10.614656    4114 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 17:58:10.614668    4114 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-937000 NodeName:running-upgrade-937000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 17:58:10.614734    4114 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-937000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
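The generated config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only Go sketch that splits such a stream and reports each document's kind (the file name is an assumed local copy):

    // Split a multi-document YAML stream on "---" separators and
    // print each document's top-level kind.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // assumed local copy
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, doc := range strings.Split(string(data), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }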
	I0926 17:58:10.614798    4114 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0926 17:58:10.617685    4114 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 17:58:10.617719    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 17:58:10.620247    4114 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0926 17:58:10.625655    4114 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 17:58:10.630404    4114 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0926 17:58:10.635831    4114 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0926 17:58:10.637435    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 17:58:10.716241    4114 ssh_runner.go:195] Run: sudo systemctl start kubelet
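The lines above install the kubelet unit files over scp, daemon-reload systemd, and start the service. A hedged Go sketch of that install step (the unit body is abbreviated; the real drop-in is the 380-byte file shown in the log, and this assumes root privileges):

    // Write a kubelet systemd drop-in, reload units, start the service.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        unit := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml\n"
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
            []byte(unit), 0o644); err != nil {
            log.Fatal(err)
        }
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                log.Fatalf("systemctl %v: %v\n%s", args, err, out)
            }
        }
    }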
	I0926 17:58:10.721400    4114 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000 for IP: 10.0.2.15
	I0926 17:58:10.721411    4114 certs.go:194] generating shared ca certs ...
	I0926 17:58:10.721420    4114 certs.go:226] acquiring lock for ca certs: {Name:mk27a718ead98149a4ca4d0cc52012d8aa60b9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:58:10.721579    4114 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key
	I0926 17:58:10.721628    4114 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key
	I0926 17:58:10.721636    4114 certs.go:256] generating profile certs ...
	I0926 17:58:10.721716    4114 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.key
	I0926 17:58:10.721735    4114 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key.c9c8d9c1
	I0926 17:58:10.721745    4114 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt.c9c8d9c1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0926 17:58:10.839174    4114 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt.c9c8d9c1 ...
	I0926 17:58:10.839183    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt.c9c8d9c1: {Name:mk9315d5c665e1a31075cebfbbfdaa046e369250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:58:10.839442    4114 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key.c9c8d9c1 ...
	I0926 17:58:10.839448    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key.c9c8d9c1: {Name:mk10c1a72973da0417add44b90f0cc8d26379f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:58:10.839603    4114 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt.c9c8d9c1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt
	I0926 17:58:10.839961    4114 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key.c9c8d9c1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key
	I0926 17:58:10.840147    4114 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/proxy-client.key
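crypto.go above generates the apiserver serving certificate with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]. A minimal stdlib sketch of issuing a certificate with those SANs (self-signed here for brevity; the real certificate is signed by minikubeCA):

    // Issue a self-signed serving cert whose IP SANs match the log line.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SANs from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }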
	I0926 17:58:10.840276    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem (1338 bytes)
	W0926 17:58:10.840306    4114 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597_empty.pem, impossibly tiny 0 bytes
	I0926 17:58:10.840312    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 17:58:10.840339    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem (1078 bytes)
	I0926 17:58:10.840364    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem (1123 bytes)
	I0926 17:58:10.840387    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem (1679 bytes)
	I0926 17:58:10.840439    4114 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem (1708 bytes)
	I0926 17:58:10.840792    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 17:58:10.848661    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 17:58:10.855871    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 17:58:10.863582    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 17:58:10.870722    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0926 17:58:10.877958    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 17:58:10.885591    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 17:58:10.892161    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 17:58:10.899638    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 17:58:10.907314    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem --> /usr/share/ca-certificates/1597.pem (1338 bytes)
	I0926 17:58:10.915346    4114 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /usr/share/ca-certificates/15972.pem (1708 bytes)
	I0926 17:58:10.922341    4114 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 17:58:10.927552    4114 ssh_runner.go:195] Run: openssl version
	I0926 17:58:10.929513    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 17:58:10.932544    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:58:10.934115    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:58:10.934148    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 17:58:10.935786    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 17:58:10.939068    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597.pem && ln -fs /usr/share/ca-certificates/1597.pem /etc/ssl/certs/1597.pem"
	I0926 17:58:10.942132    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597.pem
	I0926 17:58:10.943481    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:28 /usr/share/ca-certificates/1597.pem
	I0926 17:58:10.943508    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597.pem
	I0926 17:58:10.945479    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597.pem /etc/ssl/certs/51391683.0"
	I0926 17:58:10.948337    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15972.pem && ln -fs /usr/share/ca-certificates/15972.pem /etc/ssl/certs/15972.pem"
	I0926 17:58:10.951621    4114 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15972.pem
	I0926 17:58:10.953296    4114 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:28 /usr/share/ca-certificates/15972.pem
	I0926 17:58:10.953317    4114 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15972.pem
	I0926 17:58:10.955293    4114 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15972.pem /etc/ssl/certs/3ec20f2e.0"
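The openssl x509 -hash / ln -fs pairs above wire each installed PEM into OpenSSL's hashed CA directory: lookups in /etc/ssl/certs go by subject hash (e.g. b5213941.0). A Go sketch of the same wiring, with illustrative paths and no sudo handling:

    // Compute a certificate's subject hash via openssl and create the
    // "<hash>.0" symlink OpenSSL expects in its CA directory.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mimic `ln -fs` (force replace)
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
    }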
	I0926 17:58:10.958458    4114 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 17:58:10.959963    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 17:58:10.962196    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 17:58:10.963928    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 17:58:10.965745    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 17:58:10.967740    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 17:58:10.969761    4114 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
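Each -checkend 86400 probe above asks whether a certificate expires within the next 24 hours (86400 seconds). An equivalent stdlib check, assuming a local copy of the certificate file:

    // Report whether a PEM certificate expires within the next 24h,
    // mirroring `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("apiserver.crt") // assumed local path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least another day")
    }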
	I0926 17:58:10.971438    4114 kubeadm.go:392] StartCluster: {Name:running-upgrade-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-937000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 17:58:10.971513    4114 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:58:10.984802    4114 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 17:58:10.988369    4114 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 17:58:10.988375    4114 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 17:58:10.988406    4114 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 17:58:10.991816    4114 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:58:10.992048    4114 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-937000" does not appear in /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:58:10.992095    4114 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1075/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-937000" cluster setting kubeconfig missing "running-upgrade-937000" context setting]
	I0926 17:58:10.992236    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:58:10.992815    4114 kapi.go:59] client config for running-upgrade-937000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106156570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 17:58:10.993137    4114 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 17:58:10.996088    4114 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-937000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
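Drift detection above is just diff -u between the deployed kubeadm.yaml and the freshly rendered .new file; exit code 1 means "differs, reconfigure". A small Go sketch of that decision:

    // Run `diff -u old new`: exit 0 means identical, exit 1 means drift,
    // anything else is an error (missing file, etc.).
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        if err == nil {
            fmt.Println("no drift")
            return
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("config drift detected:\n%s", out)
            return
        }
        log.Fatal(err)
    }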
	I0926 17:58:10.996093    4114 kubeadm.go:1160] stopping kube-system containers ...
	I0926 17:58:10.996145    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 17:58:11.007495    4114 docker.go:483] Stopping containers: [801d2cd47b84 21c0cbfce612 8624e6cc00e0 6536b1c9a022 12be6493637c e6820f61ec98 936423c2e273 6ebd37f8910f 565503b38a6a 970dc311cd9a a11879cd8b3c 84dcabbe2a2f]
	I0926 17:58:11.007570    4114 ssh_runner.go:195] Run: docker stop 801d2cd47b84 21c0cbfce612 8624e6cc00e0 6536b1c9a022 12be6493637c e6820f61ec98 936423c2e273 6ebd37f8910f 565503b38a6a 970dc311cd9a a11879cd8b3c 84dcabbe2a2f
	I0926 17:58:11.018936    4114 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 17:58:11.122197    4114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 17:58:11.126664    4114 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Sep 27 00:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 27 00:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 27 00:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 27 00:57 /etc/kubernetes/scheduler.conf
	
	I0926 17:58:11.126709    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf
	I0926 17:58:11.130745    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:58:11.130780    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 17:58:11.134620    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf
	I0926 17:58:11.137840    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:58:11.137867    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 17:58:11.140953    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf
	I0926 17:58:11.143955    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:58:11.143977    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 17:58:11.147332    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf
	I0926 17:58:11.150511    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0926 17:58:11.150535    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
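The grep/rm pairs above keep a kubeconfig only if it already references the expected control-plane endpoint; stale files are deleted so kubeadm can regenerate them. A compact sketch of the same prune, with the endpoint taken from the log:

    // Remove any /etc/kubernetes config that does not mention the
    // expected control-plane endpoint.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:50284")
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Println("removing stale", f)
                os.Remove(f) // errors ignored in this sketch
            }
        }
    }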
	I0926 17:58:11.153782    4114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 17:58:11.156662    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 17:58:11.176984    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 17:58:11.948720    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 17:58:12.150689    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 17:58:12.172900    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
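The restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init. A sketch that runs the same phases in the logged order (assumes kubeadm on PATH; minikube actually invokes the versioned binary under /var/lib/minikube/binaries):

    // Replay kubeadm init phases against a fixed config, stopping on
    // the first failure.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("phase %v failed: %v", p, err)
            }
        }
    }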
	I0926 17:58:12.201602    4114 api_server.go:52] waiting for apiserver process to appear ...
	I0926 17:58:12.201688    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:58:12.703988    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:58:13.203786    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 17:58:13.208571    4114 api_server.go:72] duration metric: took 1.006999333s to wait for apiserver process to appear ...
	I0926 17:58:13.208580    4114 api_server.go:88] waiting for apiserver healthz status ...
	I0926 17:58:13.208594    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:18.210576    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:18.210670    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:23.211098    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:23.211138    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:28.211516    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:28.211623    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:33.212714    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:33.212809    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:38.214196    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:38.214295    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:43.216145    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:43.216281    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:48.218635    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:48.218695    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:53.221063    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:53.221142    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:58:58.223513    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:58:58.223590    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:03.225454    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:03.225545    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:08.228071    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:08.228121    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:13.229049    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
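The loop above polls https://10.0.2.15:8443/healthz, and every probe times out after roughly five seconds without reaching the apiserver. A hedged sketch of such a poller (timeout and retry pacing approximate the log, not minikube's exact parameters; TLS verification is skipped because only reachability matters here):

    // Poll an apiserver /healthz endpoint until it answers or a
    // deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gaps in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                fmt.Println("apiserver healthz:", resp.Status)
                return
            }
            fmt.Println("stopped:", err)
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for apiserver")
    }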
	I0926 17:59:13.229583    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:13.271562    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:13.271780    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:13.300499    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:13.300621    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:13.314946    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:13.315025    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:13.327316    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:13.327404    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:13.337885    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:13.337953    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:13.349066    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:13.349148    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:13.359034    4114 logs.go:276] 0 containers: []
	W0926 17:59:13.359045    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:13.359109    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:13.370029    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:13.370047    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:13.370053    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:13.381397    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:13.381407    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:13.407765    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:13.407775    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:13.411846    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:13.411853    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:13.483827    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:13.483840    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:13.512329    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:13.512341    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:13.526567    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:13.526584    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:13.561297    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:13.561304    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:13.575228    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:13.575240    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:13.597910    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:13.597921    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:13.611909    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:13.611921    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:13.624788    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:13.624800    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 17:59:13.643016    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:13.643026    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:13.654561    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:13.654571    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:13.665630    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:13.665640    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:13.680122    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:13.680131    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:13.691500    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:13.691510    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
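When healthz keeps failing, the run falls back to gathering diagnostics: docker ps -a per component, then docker logs --tail 400 for each container found. A sketch of that pattern for a single component:

    // List containers for one component and dump the tail of each
    // container's logs, mirroring the gathering loop above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== %s ==\n%s\n", id, logs)
        }
    }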
	I0926 17:59:16.209003    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:21.209775    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:21.210277    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:21.252931    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:21.253071    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:21.270287    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:21.270395    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:21.284219    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:21.284320    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:21.296260    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:21.296356    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:21.306587    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:21.306670    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:21.317037    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:21.317107    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:21.327246    4114 logs.go:276] 0 containers: []
	W0926 17:59:21.327256    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:21.327327    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:21.337833    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:21.337848    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:21.337852    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:21.374414    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:21.374420    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 17:59:21.387358    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:21.387368    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:21.398604    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:21.398613    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:21.434122    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:21.434135    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:21.448094    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:21.448103    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:21.473042    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:21.473054    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:21.487167    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:21.487178    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 17:59:21.504789    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:21.504798    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:21.515789    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:21.515799    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:21.542126    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:21.542135    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:21.546123    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:21.546131    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:21.560308    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:21.560317    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:21.570908    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:21.570917    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:21.582322    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:21.582333    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:21.598251    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:21.598264    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:21.609526    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:21.609535    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:24.123377    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:29.126446    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:29.127038    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:29.166468    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:29.166627    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:29.193203    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:29.193332    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:29.208056    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:29.208162    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:29.220676    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:29.220762    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:29.231365    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:29.231436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:29.241816    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:29.241898    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:29.252110    4114 logs.go:276] 0 containers: []
	W0926 17:59:29.252124    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:29.252181    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:29.262672    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:29.262690    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:29.262695    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:29.267460    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:29.267469    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:29.283918    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:29.283927    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:29.296763    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:29.296773    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 17:59:29.314648    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:29.314657    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:29.326063    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:29.326073    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:29.338926    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:29.338936    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:29.353188    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:29.353197    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:29.366281    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:29.366289    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:29.384390    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:29.384400    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:29.400598    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:29.400607    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:29.435743    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:29.435749    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:29.472174    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:29.472184    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:29.496779    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:29.496789    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:29.512258    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:29.512269    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 17:59:29.524103    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:29.524117    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:29.540940    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:29.540951    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:32.069228    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:37.071756    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:37.071932    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:37.091068    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:37.091171    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:37.102947    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:37.103032    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:37.113313    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:37.113390    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:37.123969    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:37.124048    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:37.138021    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:37.138096    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:37.148580    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:37.148655    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:37.158738    4114 logs.go:276] 0 containers: []
	W0926 17:59:37.158751    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:37.158818    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:37.171718    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:37.171735    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:37.171740    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:37.196194    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:37.196204    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:37.211581    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:37.211590    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 17:59:37.223991    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:37.224001    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:37.234965    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:37.234975    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:37.270642    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:37.270657    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:37.283581    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:37.283589    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:37.309477    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:37.309485    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:37.320895    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:37.320906    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:37.326188    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:37.326200    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:37.344184    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:37.344194    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:37.360027    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:37.360037    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:37.378068    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:37.378082    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:37.389377    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:37.389387    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:37.423838    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:37.423846    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:37.447774    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:37.447783    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:37.459871    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:37.459880    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 17:59:39.979317    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:44.980700    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:44.981299    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:45.020746    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:45.020912    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:45.041736    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:45.041850    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:45.058304    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:45.058398    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:45.070930    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:45.071018    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:45.081961    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:45.082034    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:45.093146    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:45.093217    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:45.103423    4114 logs.go:276] 0 containers: []
	W0926 17:59:45.103437    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:45.103505    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:45.118984    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:45.119003    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:45.119008    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 17:59:45.136518    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:45.136530    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:45.147505    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:45.147515    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:45.152087    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:45.152095    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:45.193083    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:45.193096    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:45.207894    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:45.207906    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:45.219236    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:45.219250    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:45.230758    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:45.230767    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 17:59:45.242276    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:45.242285    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:45.269092    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:45.269103    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:45.304236    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:45.304245    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:45.321937    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:45.321946    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:45.346259    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:45.346274    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:45.359824    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:45.359833    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:45.371006    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:45.371017    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:45.385147    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:45.385157    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:45.397197    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:45.397205    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:47.913230    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 17:59:52.915884    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 17:59:52.916453    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 17:59:52.956494    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 17:59:52.956661    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 17:59:52.979183    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 17:59:52.979295    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 17:59:52.994320    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 17:59:52.994410    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 17:59:53.008855    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 17:59:53.008942    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 17:59:53.019732    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 17:59:53.019820    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 17:59:53.032464    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 17:59:53.032539    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 17:59:53.043375    4114 logs.go:276] 0 containers: []
	W0926 17:59:53.043389    4114 logs.go:278] No container was found matching "kindnet"
	I0926 17:59:53.043456    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 17:59:53.054973    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 17:59:53.054990    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 17:59:53.054995    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 17:59:53.088552    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 17:59:53.088561    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 17:59:53.102746    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 17:59:53.102756    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 17:59:53.114275    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 17:59:53.114289    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 17:59:53.125962    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 17:59:53.125971    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 17:59:53.137657    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 17:59:53.137667    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 17:59:53.149671    4114 logs.go:123] Gathering logs for Docker ...
	I0926 17:59:53.149681    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 17:59:53.175845    4114 logs.go:123] Gathering logs for container status ...
	I0926 17:59:53.175855    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 17:59:53.187220    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 17:59:53.187232    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 17:59:53.191775    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 17:59:53.191784    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 17:59:53.216523    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 17:59:53.216532    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 17:59:53.233372    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 17:59:53.233380    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 17:59:53.247031    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 17:59:53.247041    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 17:59:53.260239    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 17:59:53.260250    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 17:59:53.297414    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 17:59:53.297423    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 17:59:53.311508    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 17:59:53.311522    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 17:59:53.326066    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 17:59:53.326078    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
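
Each failed health check triggers the same gather cycle seen above: for every control-plane component, minikube lists matching containers with a docker name filter (`docker ps -a --filter=name=k8s_<component> --format={{.ID}}`), then tails the last 400 log lines of each hit. The Go sketch below reproduces that cycle with the exact docker commands from the log; the one simplification is that it runs them locally, whereas minikube executes them over SSH inside the VM via ssh_runner:

    // gather_logs.go: sketch of the per-component log-gather cycle seen above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the "docker ps -a --filter=name=k8s_..." discovery step.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner",
    	}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println("docker ps failed:", err)
    			return
    		}
    		for _, id := range ids {
    			// Same tail depth as the log: "docker logs --tail 400 <id>".
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }

The host-level sources in the same cycle (kubelet and Docker via journalctl, dmesg, and `kubectl describe nodes` against /var/lib/minikube/kubeconfig) appear verbatim in the Run: lines above and could be shelled out the same way.
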
	I0926 17:59:55.845119    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:00.847503    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:00.847896    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:00.884256    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:00.884436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:00.905356    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:00.905488    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:00.924874    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:00.924976    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:00.937499    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:00.937589    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:00.951988    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:00.952080    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:00.962732    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:00.962817    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:00.978298    4114 logs.go:276] 0 containers: []
	W0926 18:00:00.978311    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:00.978393    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:00.988921    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:00.988939    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:00.988944    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:01.014375    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:01.014389    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:01.031821    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:01.031832    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:01.043104    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:01.043113    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:01.059264    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:01.059273    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:01.083933    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:01.083949    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:01.096911    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:01.096921    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:01.116028    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:01.116040    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:01.128699    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:01.128713    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:01.141912    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:01.141923    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:01.147124    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:01.147137    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:01.183412    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:01.183421    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:01.207432    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:01.207444    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:01.221072    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:01.221085    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:01.236768    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:01.236780    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:01.249177    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:01.249185    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:01.264781    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:01.264789    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:03.803172    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:08.805452    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:08.805788    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:08.849261    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:08.849378    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:08.869899    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:08.869998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:08.885359    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:08.885463    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:08.898758    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:08.898852    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:08.911324    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:08.911409    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:08.923572    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:08.923650    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:08.935050    4114 logs.go:276] 0 containers: []
	W0926 18:00:08.935064    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:08.935137    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:08.949889    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:08.949908    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:08.949919    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:08.962898    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:08.962913    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:08.979521    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:08.979533    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:08.998303    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:08.998318    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:09.011655    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:09.011668    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:09.033509    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:09.033525    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:09.046597    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:09.046609    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:09.061262    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:09.061272    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:09.076135    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:09.076146    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:09.113138    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:09.113151    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:09.127346    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:09.127357    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:09.139743    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:09.139753    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:09.165948    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:09.165965    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:09.177323    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:09.177336    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:09.212863    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:09.212871    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:09.217018    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:09.217025    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:09.242108    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:09.242117    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:11.755815    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:16.758418    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:16.758998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:16.800897    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:16.801051    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:16.820740    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:16.820841    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:16.835885    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:16.835967    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:16.848071    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:16.848157    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:16.858548    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:16.858631    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:16.869990    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:16.870069    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:16.880339    4114 logs.go:276] 0 containers: []
	W0926 18:00:16.880349    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:16.880410    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:16.891314    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:16.891330    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:16.891335    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:16.896136    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:16.896142    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:16.907495    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:16.907505    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:16.932071    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:16.932078    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:16.943600    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:16.943609    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:16.969154    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:16.969163    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:16.983224    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:16.983232    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:17.001255    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:17.001266    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:17.012457    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:17.012470    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:17.033534    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:17.033544    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:17.044943    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:17.044956    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:17.055776    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:17.055784    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:17.069601    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:17.069612    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:17.084932    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:17.084941    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:17.096863    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:17.096872    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:17.112702    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:17.112714    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:17.149782    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:17.149791    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:19.686452    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:24.689070    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:24.689678    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:24.728608    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:24.728776    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:24.750960    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:24.751094    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:24.766163    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:24.766251    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:24.778903    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:24.778975    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:24.790210    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:24.790296    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:24.800918    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:24.800998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:24.817115    4114 logs.go:276] 0 containers: []
	W0926 18:00:24.817125    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:24.817187    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:24.827828    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:24.827850    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:24.827856    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:24.839514    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:24.839524    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:24.863327    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:24.863338    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:24.892402    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:24.892413    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:24.907158    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:24.907168    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:24.925915    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:24.925926    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:24.937124    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:24.937134    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:24.948943    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:24.948953    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:24.985253    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:24.985261    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:25.019272    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:25.019281    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:25.033836    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:25.033845    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:25.045668    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:25.045678    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:25.058013    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:25.058021    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:25.070615    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:25.070627    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:25.082583    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:25.082592    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:25.086968    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:25.086975    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:25.100778    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:25.100787    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:27.618959    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:32.621107    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:32.621249    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:32.633110    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:32.633196    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:32.644988    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:32.645074    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:32.657523    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:32.657610    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:32.670678    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:32.670776    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:32.689025    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:32.689098    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:32.701082    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:32.701172    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:32.712747    4114 logs.go:276] 0 containers: []
	W0926 18:00:32.712759    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:32.712838    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:32.724554    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:32.724574    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:32.724583    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:32.729230    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:32.729249    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:32.745330    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:32.745344    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:32.772963    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:32.772977    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:32.789093    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:32.789106    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:32.821246    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:32.821264    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:32.838501    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:32.838524    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:32.854443    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:32.854456    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:32.868679    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:32.868695    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:32.910899    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:32.910919    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:32.928837    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:32.928847    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:32.943852    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:32.943863    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:32.959788    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:32.959799    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:32.978828    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:32.978837    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:33.015365    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:33.015376    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:33.027243    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:33.027253    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:33.038542    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:33.038554    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:35.561127    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:40.563739    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:40.564220    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:40.595946    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:40.596101    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:40.616196    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:40.616312    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:40.630427    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:40.630520    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:40.648358    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:40.648451    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:40.660641    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:40.660711    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:40.671313    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:40.671397    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:40.681732    4114 logs.go:276] 0 containers: []
	W0926 18:00:40.681742    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:40.681802    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:40.692578    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:40.692600    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:40.692606    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:40.708957    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:40.708968    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:40.726496    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:40.726513    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:40.738156    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:40.738167    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:40.763626    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:40.763633    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:40.777012    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:40.777022    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:40.791037    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:40.791048    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:40.802535    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:40.802547    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:40.814039    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:40.814049    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:40.825259    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:40.825273    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:40.865432    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:40.865441    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:40.879038    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:40.879050    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:40.903741    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:40.903751    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:40.940461    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:40.940473    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:40.944603    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:40.944611    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:40.959427    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:40.959438    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:40.977592    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:40.977602    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:43.493470    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:48.495521    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:48.495649    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:48.506965    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:48.507052    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:48.517755    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:48.517839    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:48.528538    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:48.528615    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:48.538904    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:48.538985    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:48.549791    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:48.549874    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:48.561208    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:48.561299    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:48.572905    4114 logs.go:276] 0 containers: []
	W0926 18:00:48.572916    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:48.573004    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:48.584008    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:48.584032    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:48.584037    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:48.597098    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:48.597110    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:48.613104    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:48.613117    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:48.625879    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:48.625891    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:48.651841    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:48.651855    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:48.656605    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:48.656615    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:48.672155    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:48.672168    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:48.698488    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:48.698507    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:48.711651    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:48.711665    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:48.725607    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:48.725619    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:48.739967    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:48.739979    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:48.757474    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:48.757491    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:48.775824    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:48.775838    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:48.788952    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:48.788963    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:48.827944    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:48.827969    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:48.864576    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:48.864586    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:48.882203    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:48.882213    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:51.396410    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:00:56.398520    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:00:56.398663    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:00:56.410916    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:00:56.411014    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:00:56.422573    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:00:56.422651    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:00:56.434289    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:00:56.434368    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:00:56.445803    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:00:56.445886    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:00:56.457282    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:00:56.457363    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:00:56.468790    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:00:56.468871    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:00:56.479972    4114 logs.go:276] 0 containers: []
	W0926 18:00:56.479985    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:00:56.480058    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:00:56.491577    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:00:56.491598    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:00:56.491604    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:00:56.503990    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:00:56.504001    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:00:56.541308    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:00:56.541330    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:00:56.568164    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:00:56.568179    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:00:56.582934    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:00:56.582951    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:00:56.595496    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:00:56.595507    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:00:56.614288    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:00:56.614305    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:00:56.626891    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:00:56.626903    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:00:56.652945    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:00:56.652964    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:00:56.658219    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:00:56.658227    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:00:56.696318    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:00:56.696333    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:00:56.716458    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:00:56.716469    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:00:56.728766    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:00:56.728777    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:00:56.745144    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:00:56.745155    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:00:56.761134    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:00:56.761147    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:00:56.776858    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:00:56.776870    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:00:56.792661    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:00:56.792678    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:00:59.307675    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:04.308623    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:04.308900    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:04.337135    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:04.337277    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:04.359350    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:04.359448    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:04.371607    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:04.371688    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:04.383337    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:04.383412    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:04.393566    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:04.393636    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:04.404466    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:04.404545    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:04.417249    4114 logs.go:276] 0 containers: []
	W0926 18:01:04.417259    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:04.417325    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:04.427533    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:04.427551    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:04.427556    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:04.439014    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:04.439026    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:04.450917    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:04.450925    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:04.486196    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:04.486206    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:04.500403    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:04.500417    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:04.511903    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:04.511912    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:04.529441    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:04.529452    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:04.541064    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:04.541075    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:04.565922    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:04.565932    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:04.580184    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:04.580194    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:04.592033    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:04.592048    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:04.603663    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:04.603679    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:04.614735    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:04.614745    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:04.639112    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:04.639122    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:04.675961    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:04.675967    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:04.680714    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:04.680724    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:04.694767    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:04.694777    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:07.211169    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:12.213297    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:12.213500    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:12.227505    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:12.227602    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:12.239200    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:12.239273    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:12.249790    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:12.249868    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:12.260327    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:12.260399    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:12.276076    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:12.276166    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:12.286348    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:12.286428    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:12.296989    4114 logs.go:276] 0 containers: []
	W0926 18:01:12.297001    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:12.297072    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:12.307890    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:12.307909    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:12.307914    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:12.322480    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:12.322493    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:12.339796    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:12.339813    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:12.352117    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:12.352133    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:12.364023    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:12.364037    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:12.389038    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:12.389056    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:12.426205    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:12.426215    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:12.430981    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:12.430990    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:12.465538    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:12.465549    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:12.492543    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:12.492554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:12.507927    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:12.507938    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:12.524012    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:12.524024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:12.542169    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:12.542181    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:12.556805    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:12.556819    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:12.571145    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:12.571156    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:12.585932    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:12.585950    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:12.597217    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:12.597227    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
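
	The block above is one pass of minikube's apiserver wait loop, and the same cycle repeats below until the restart budget runs out: every few seconds the harness probes https://10.0.2.15:8443/healthz with a 5-second client timeout, and after each failed probe it re-enumerates the control-plane containers and re-collects their logs. A condensed sketch of the same collection, using only commands already visible in the log (run over SSH in the guest; the health probe itself is issued by the Go client, so the curl stand-in is an assumption):

	    # Probe apiserver health roughly the way the harness does (5s per attempt)
	    curl -sk --max-time 5 https://10.0.2.15:8443/healthz

	    # Enumerate each control-plane container by name, then tail its logs
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      docker ps -a --filter=name=k8s_$c --format '{{.ID}}' |
	        xargs -r -n1 docker logs --tail 400
	    done

	    # Plus the host-side sources gathered in the same pass
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
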
	I0926 18:01:15.110447    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:20.112959    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:20.113094    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:20.124355    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:20.124446    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:20.135575    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:20.135662    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:20.146576    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:20.146655    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:20.157681    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:20.157764    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:20.168475    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:20.168556    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:20.179116    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:20.179193    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:20.189237    4114 logs.go:276] 0 containers: []
	W0926 18:01:20.189248    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:20.189315    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:20.199960    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:20.199978    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:20.199984    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:20.211250    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:20.211261    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:20.223252    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:20.223262    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:20.246841    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:20.246850    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:20.258818    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:20.258829    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:20.275346    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:20.275356    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:20.286861    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:20.286871    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:20.322452    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:20.322462    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:20.334589    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:20.334601    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:20.349353    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:20.349368    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:20.354306    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:20.354312    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:20.383185    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:20.383196    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:20.397132    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:20.397142    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:20.413499    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:20.413511    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:20.428991    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:20.429001    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:20.446428    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:20.446438    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:20.457805    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:20.457815    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:22.997836    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:28.000016    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:28.000210    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:28.012554    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:28.012647    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:28.023797    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:28.023876    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:28.034159    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:28.034244    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:28.046513    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:28.046605    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:28.062718    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:28.062802    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:28.073647    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:28.073730    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:28.091164    4114 logs.go:276] 0 containers: []
	W0926 18:01:28.091176    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:28.091245    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:28.111888    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:28.111907    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:28.111912    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:28.150626    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:28.150634    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:28.166137    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:28.166149    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:28.178691    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:28.178702    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:28.199260    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:28.199270    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:28.210844    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:28.210854    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:28.234067    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:28.234075    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:28.249950    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:28.249964    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:28.267887    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:28.267900    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:28.281012    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:28.281024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:28.292782    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:28.292795    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:28.304805    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:28.304818    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:28.318227    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:28.318239    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:28.346360    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:28.346375    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:28.361521    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:28.361536    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:28.374710    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:28.374725    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:28.379133    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:28.379140    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:30.916076    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:35.918755    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:35.919126    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:35.953025    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:35.953194    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:35.973173    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:35.973273    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:35.987813    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:35.987903    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:36.000434    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:36.000519    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:36.010971    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:36.011038    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:36.021411    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:36.021488    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:36.032365    4114 logs.go:276] 0 containers: []
	W0926 18:01:36.032377    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:36.032445    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:36.047341    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:36.047357    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:36.047363    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:36.087793    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:36.087803    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:36.118218    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:36.118233    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:36.147607    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:36.147621    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:36.167506    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:36.167520    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:36.178471    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:36.178483    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:36.201912    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:36.201921    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:36.237659    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:36.237672    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:36.242310    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:36.242317    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:36.267598    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:36.267609    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:36.280323    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:36.280336    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:36.291395    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:36.291407    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:36.305136    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:36.305150    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:36.316559    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:36.316568    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:36.328814    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:36.328827    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:36.344052    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:36.344062    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:36.362980    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:36.362990    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:38.875857    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:43.877924    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:43.878045    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:43.890173    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:43.890261    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:43.902156    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:43.902240    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:43.913833    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:43.913918    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:43.925993    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:43.926084    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:43.937429    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:43.937526    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:43.950024    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:43.950108    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:43.966108    4114 logs.go:276] 0 containers: []
	W0926 18:01:43.966120    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:43.966195    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:43.979432    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:43.979450    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:43.979456    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:43.996939    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:43.996953    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:44.009969    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:44.009984    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:44.025338    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:44.025351    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:44.038220    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:44.038232    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:44.050744    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:44.050757    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:44.075075    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:44.075091    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:44.092736    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:44.092749    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:44.132094    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:44.132121    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:44.148632    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:44.148647    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:44.175721    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:44.175738    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:44.192371    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:44.192383    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:44.204590    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:44.204606    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:44.217021    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:44.217034    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:44.221697    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:44.221706    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:44.234808    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:44.234821    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:44.253427    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:44.253441    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:46.791907    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:51.794042    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:51.794389    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:51.822315    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:51.822457    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:51.839536    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:51.839634    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:51.853404    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:51.853486    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:51.865316    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:51.865398    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:51.875821    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:51.875909    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:51.886545    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:51.886628    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:51.896418    4114 logs.go:276] 0 containers: []
	W0926 18:01:51.896433    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:51.896494    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:51.906980    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:51.906998    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:51.907003    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:51.924210    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:51.924227    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:51.941027    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:51.941038    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:51.953209    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:51.953220    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:51.965479    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:51.965489    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:51.977207    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:51.977217    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:51.999756    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:51.999764    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:52.013854    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:52.013865    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:52.018148    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:52.018155    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:52.053488    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:52.053499    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:52.078429    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:52.078439    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:52.094421    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:52.094436    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:52.129589    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:52.129597    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:52.145363    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:52.145376    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:52.161919    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:52.161931    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:52.178338    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:52.178349    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:52.191004    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:52.191013    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:54.704775    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:59.705397    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:59.705497    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:59.720155    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:59.720236    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:59.731324    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:59.731413    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:59.746671    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:59.746758    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:59.757199    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:59.757288    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:59.769130    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:59.769212    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:59.779676    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:59.779759    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:59.793564    4114 logs.go:276] 0 containers: []
	W0926 18:01:59.793576    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:59.793649    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:59.804330    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:59.804347    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:59.804352    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:59.825641    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:59.825655    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:59.838555    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:59.838569    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:59.850328    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:59.850338    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:59.862512    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:59.862526    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:59.874722    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:59.874736    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:59.889670    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:59.889681    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:59.903701    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:59.903716    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:59.928522    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:59.928537    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:59.943431    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:59.943446    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:59.965639    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:59.965654    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:59.969970    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:59.969977    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:00.006777    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:02:00.006786    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:02:00.021632    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:02:00.021646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:02:00.039253    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:02:00.039264    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:02:00.050861    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:02:00.050876    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:02:00.067124    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:00.067136    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:02.607071    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:07.607400    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:07.607703    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:07.636708    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:02:07.636810    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:07.650233    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:02:07.650316    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:07.662468    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:02:07.662541    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:07.672941    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:02:07.673023    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:07.683618    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:02:07.683706    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:07.694091    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:02:07.694181    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:07.704150    4114 logs.go:276] 0 containers: []
	W0926 18:02:07.704162    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:07.704228    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:07.714572    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:02:07.714590    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:02:07.714595    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:02:07.727989    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:02:07.727999    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:02:07.744769    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:02:07.744784    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:02:07.756223    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:02:07.756232    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:02:07.771362    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:02:07.771377    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:02:07.783205    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:02:07.783215    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:07.795438    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:07.795454    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:07.830472    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:02:07.830480    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:02:07.844767    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:02:07.844778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:02:07.857250    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:02:07.857262    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:02:07.869440    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:02:07.869451    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:02:07.880634    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:07.880643    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:07.885345    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:07.885355    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:07.920265    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:02:07.920278    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:02:07.933116    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:02:07.933128    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:02:07.957558    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:02:07.957567    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:02:07.972128    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:07.972138    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:10.497467    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:15.499797    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:15.499962    4114 kubeadm.go:597] duration metric: took 4m4.51839775s to restartPrimaryControlPlane
	W0926 18:02:15.500121    4114 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0926 18:02:15.500165    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0926 18:02:16.554566    4114 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054408083s)
	I0926 18:02:16.554646    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:02:16.559474    4114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:02:16.562329    4114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:02:16.565033    4114 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:02:16.565040    4114 kubeadm.go:157] found existing configuration files:
	
	I0926 18:02:16.565062    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf
	I0926 18:02:16.567509    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:02:16.567540    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:02:16.569915    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf
	I0926 18:02:16.572851    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:02:16.572885    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:02:16.576060    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf
	I0926 18:02:16.578882    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:02:16.578913    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:02:16.581813    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf
	I0926 18:02:16.584866    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:02:16.584892    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
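
	After the wait loop exhausts its budget, minikube abandons the in-place restart, wipes the control plane with "kubeadm reset", and then clears any kubeconfig that does not already reference the expected endpoint so that the upcoming "kubeadm init" can regenerate it. The four grep/rm pairs above collapse to the following sketch (endpoint and file names exactly as reported in the log):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q https://control-plane.minikube.internal:50284 /etc/kubernetes/$f.conf ||
	        sudo rm -f /etc/kubernetes/$f.conf
	    done
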
	I0926 18:02:16.588191    4114 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 18:02:16.605009    4114 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0926 18:02:16.605151    4114 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 18:02:16.660826    4114 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 18:02:16.660879    4114 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 18:02:16.660929    4114 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0926 18:02:16.712964    4114 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:02:16.717008    4114 out.go:235]   - Generating certificates and keys ...
	I0926 18:02:16.717044    4114 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 18:02:16.717078    4114 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 18:02:16.717123    4114 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:02:16.717156    4114 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0926 18:02:16.717189    4114 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:02:16.717216    4114 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0926 18:02:16.717248    4114 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0926 18:02:16.717281    4114 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:02:16.717323    4114 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:02:16.717378    4114 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:02:16.717399    4114 kubeadm.go:310] [certs] Using the existing "sa" key
	I0926 18:02:16.717427    4114 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:02:16.775923    4114 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:02:16.829960    4114 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:02:17.066595    4114 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:02:17.216810    4114 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:02:17.245511    4114 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:02:17.245760    4114 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:02:17.245888    4114 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 18:02:17.334473    4114 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:02:17.337503    4114 out.go:235]   - Booting up control plane ...
	I0926 18:02:17.337589    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:02:17.337646    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:02:17.337709    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:02:17.337772    4114 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:02:17.338064    4114 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0926 18:02:21.838741    4114 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501433 seconds
	I0926 18:02:21.838807    4114 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 18:02:21.843851    4114 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 18:02:22.356821    4114 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 18:02:22.357082    4114 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-937000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 18:02:22.859809    4114 kubeadm.go:310] [bootstrap-token] Using token: 5ikksf.pbrpxtw98s1hgyjs
	I0926 18:02:22.865766    4114 out.go:235]   - Configuring RBAC rules ...
	I0926 18:02:22.865838    4114 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 18:02:22.865886    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 18:02:22.873896    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 18:02:22.874679    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 18:02:22.875530    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 18:02:22.876379    4114 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 18:02:22.880014    4114 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 18:02:23.057922    4114 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 18:02:23.263562    4114 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 18:02:23.264121    4114 kubeadm.go:310] 
	I0926 18:02:23.264156    4114 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 18:02:23.264159    4114 kubeadm.go:310] 
	I0926 18:02:23.264194    4114 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 18:02:23.264200    4114 kubeadm.go:310] 
	I0926 18:02:23.264215    4114 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 18:02:23.264244    4114 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 18:02:23.264268    4114 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 18:02:23.264271    4114 kubeadm.go:310] 
	I0926 18:02:23.264300    4114 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 18:02:23.264371    4114 kubeadm.go:310] 
	I0926 18:02:23.264406    4114 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 18:02:23.264409    4114 kubeadm.go:310] 
	I0926 18:02:23.264451    4114 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 18:02:23.264502    4114 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 18:02:23.264610    4114 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 18:02:23.264617    4114 kubeadm.go:310] 
	I0926 18:02:23.264665    4114 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 18:02:23.264764    4114 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 18:02:23.264770    4114 kubeadm.go:310] 
	I0926 18:02:23.264825    4114 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5ikksf.pbrpxtw98s1hgyjs \
	I0926 18:02:23.264882    4114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 \
	I0926 18:02:23.264893    4114 kubeadm.go:310] 	--control-plane 
	I0926 18:02:23.264896    4114 kubeadm.go:310] 
	I0926 18:02:23.264945    4114 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 18:02:23.264951    4114 kubeadm.go:310] 
	I0926 18:02:23.264996    4114 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5ikksf.pbrpxtw98s1hgyjs \
	I0926 18:02:23.265052    4114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 
	I0926 18:02:23.265127    4114 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
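
	The join commands printed by kubeadm authenticate a joining node against the cluster CA through the sha256 discovery hash. Should the hash need to be recomputed, the standard kubeadm recipe derives it from the CA certificate; the path below follows the certificateDir reported earlier in this init run, and the openssl pipeline assumes an RSA CA key (kubeadm's default):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt |
	      openssl rsa -pubin -outform der 2>/dev/null |
	      openssl dgst -sha256 -hex | sed 's/^.* //'
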
	I0926 18:02:23.265136    4114 cni.go:84] Creating CNI manager for ""
	I0926 18:02:23.265143    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:02:23.270699    4114 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 18:02:23.278734    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 18:02:23.282036    4114 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
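
	The 496-byte conflist written here configures the plain bridge CNI that minikube recommends for the docker runtime on Kubernetes v1.24+ (see the cni.go lines above). The exact payload is not reproduced in the log; an illustrative conflist of the same shape, not the byte-for-byte file, looks like:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        }
	      ]
	    }
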
	I0926 18:02:23.287509    4114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:02:23.287574    4114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-937000 minikube.k8s.io/updated_at=2024_09_26T18_02_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=running-upgrade-937000 minikube.k8s.io/primary=true
	I0926 18:02:23.287575    4114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 18:02:23.317258    4114 kubeadm.go:1113] duration metric: took 29.734417ms to wait for elevateKubeSystemPrivileges
	I0926 18:02:23.329440    4114 ops.go:34] apiserver oom_adj: -16
	I0926 18:02:23.329549    4114 kubeadm.go:394] duration metric: took 4m12.365161042s to StartCluster
	I0926 18:02:23.329563    4114 settings.go:142] acquiring lock: {Name:mk68436efc4e8fe170d744b4cebdb7ddef61f64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:02:23.329657    4114 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:02:23.330015    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:02:23.330195    4114 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:02:23.330219    4114 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:02:23.330261    4114 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-937000"
	I0926 18:02:23.330269    4114 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-937000"
	W0926 18:02:23.330273    4114 addons.go:243] addon storage-provisioner should already be in state true
	I0926 18:02:23.330286    4114 host.go:66] Checking if "running-upgrade-937000" exists ...
	I0926 18:02:23.330289    4114 config.go:182] Loaded profile config "running-upgrade-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:02:23.330305    4114 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-937000"
	I0926 18:02:23.330337    4114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-937000"
	I0926 18:02:23.331208    4114 kapi.go:59] client config for running-upgrade-937000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106156570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
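The rest.Config dump above reduces, in client-go terms, to a host plus three TLS file paths (the paths below are taken verbatim from that line). A minimal sketch that builds the equivalent client:

    // Minimal client-go equivalent of the rest.Config dumped above.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        _ = clientset // ready for API calls once the apiserver answers
    }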
	I0926 18:02:23.331332    4114 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-937000"
	W0926 18:02:23.331337    4114 addons.go:243] addon default-storageclass should already be in state true
	I0926 18:02:23.331344    4114 host.go:66] Checking if "running-upgrade-937000" exists ...
	I0926 18:02:23.334660    4114 out.go:177] * Verifying Kubernetes components...
	I0926 18:02:23.335024    4114 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 18:02:23.338723    4114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 18:02:23.338730    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 18:02:23.342665    4114 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:02:23.346749    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:02:23.350659    4114 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:02:23.350666    4114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 18:02:23.350672    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 18:02:23.437211    4114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:02:23.442704    4114 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:02:23.442748    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:02:23.446509    4114 api_server.go:72] duration metric: took 116.306833ms to wait for apiserver process to appear ...
	I0926 18:02:23.446517    4114 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:02:23.446524    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:23.473940    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 18:02:23.511286    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:02:23.816518    4114 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:02:23.816531    4114 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:02:28.447684    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
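Each Checking/stopped pair above is one GET against /healthz that dies on a roughly five-second client deadline, surfacing as "context deadline exceeded". A sketch of that polling loop (TLS verification is skipped here for brevity, an assumption made for the sketch, not minikube's own behavior):

    // Poll the apiserver healthz endpoint with a 5s per-request deadline; a
    // timeout surfaces as "context deadline exceeded", matching the lines above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz:", string(body))
                return
            }
        }
    }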
	I0926 18:02:28.447732    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:33.448368    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:33.448415    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:38.448563    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:38.448602    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:43.449163    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:43.449185    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:48.449572    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:48.449628    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:53.450201    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:53.450260    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0926 18:02:53.816204    4114 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0926 18:02:53.820441    4114 out.go:177] * Enabled addons: storage-provisioner
	I0926 18:02:53.828333    4114 addons.go:510] duration metric: took 30.498965375s for enable addons: enabled=[storage-provisioner]
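The failed default-storageclass callback above is a plain List of StorageClasses against /apis/storage.k8s.io/v1/storageclasses. With a clientset built as in the earlier rest.Config sketch, the same call looks like this (the listStorageClasses helper is hypothetical):

    // The StorageClass list that the default-storageclass addon timed out on.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func listStorageClasses(cs *kubernetes.Clientset) error {
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            // With the apiserver unreachable this returns
            // "dial tcp 10.0.2.15:8443: i/o timeout", as in the warning above.
            return err
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
        return nil
    }

    func main() {
        cfg := &rest.Config{Host: "https://10.0.2.15:8443"} // TLS files as in the earlier sketch
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := listStorageClasses(cs); err != nil {
            log.Fatal(err)
        }
    }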
	I0926 18:02:58.451027    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:58.451067    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:03.325213    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:03.325246    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:08.326384    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:08.326426    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:13.327964    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:13.327992    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:18.329919    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:18.329958    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:23.331893    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:23.332067    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:23.342642    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:23.342727    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:23.354865    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:23.354951    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:23.365616    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:23.365688    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:23.375776    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:23.375860    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:23.386137    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:23.386216    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:23.396249    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:23.396332    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:23.405751    4114 logs.go:276] 0 containers: []
	W0926 18:03:23.405763    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:23.405832    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:23.419911    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:23.419926    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:23.419931    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:23.424230    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:23.424240    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:23.464027    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:23.464041    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:23.478617    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:23.478627    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:23.490325    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:23.490336    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:23.508251    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:23.508262    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:23.520706    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:23.520715    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:23.546004    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:23.546021    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:23.558340    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:23.558353    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:23.594909    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:23.594924    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:23.610017    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:23.610026    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:23.622127    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:23.622138    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:23.637572    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:23.637582    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
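Each gathering cycle above runs the same two docker commands per control-plane component: a filtered `docker ps -a` to find container IDs, then `docker logs --tail 400` on each ID. A compact Go sketch of that cycle:

    // Reproduce the log-gathering cycle above: list container IDs per
    // k8s_<component> name filter, then tail each container's last 400 log lines.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }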
	I0926 18:03:26.157137    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:31.159674    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:31.159870    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:31.172387    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:31.172479    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:31.182605    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:31.182679    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:31.193238    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:31.193317    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:31.203868    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:31.203955    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:31.214457    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:31.214540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:31.228008    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:31.228094    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:31.237785    4114 logs.go:276] 0 containers: []
	W0926 18:03:31.237798    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:31.237872    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:31.248063    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:31.248077    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:31.248083    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:31.282303    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:31.282313    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:31.286769    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:31.286778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:31.298188    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:31.298199    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:31.316817    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:31.316831    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:31.334247    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:31.334258    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:31.357664    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:31.357672    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:31.370184    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:31.370195    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:31.410239    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:31.410252    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:31.426627    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:31.426636    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:31.443201    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:31.443216    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:31.455824    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:31.455838    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:31.475446    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:31.475460    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:33.989704    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:38.992125    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:38.992416    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:39.016966    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:39.017069    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:39.031730    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:39.031824    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:39.044123    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:39.044212    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:39.056324    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:39.056403    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:39.066842    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:39.066928    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:39.076991    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:39.077071    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:39.087619    4114 logs.go:276] 0 containers: []
	W0926 18:03:39.087634    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:39.087701    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:39.098229    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:39.098247    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:39.098253    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:39.113128    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:39.113137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:39.125032    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:39.125043    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:39.159110    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:39.159119    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:39.163254    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:39.163261    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:39.197724    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:39.197736    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:39.212282    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:39.212293    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:39.224329    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:39.224340    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:39.235884    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:39.235895    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:39.253153    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:39.253163    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:39.264895    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:39.264906    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:39.290071    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:39.290080    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:39.302196    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:39.302209    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:41.818629    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:46.818684    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:46.818805    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:46.829554    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:46.829633    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:46.840407    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:46.840488    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:46.850721    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:46.850796    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:46.860979    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:46.861051    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:46.871206    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:46.871297    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:46.881319    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:46.881387    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:46.890839    4114 logs.go:276] 0 containers: []
	W0926 18:03:46.890849    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:46.890915    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:46.901568    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:46.901584    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:46.901589    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:46.935706    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:46.935715    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:46.940438    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:46.940444    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:46.975271    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:46.975284    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:46.995978    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:46.995990    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:47.007958    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:47.007974    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:47.022475    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:47.022485    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:47.040928    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:47.040938    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:47.059124    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:47.059136    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:47.071104    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:47.071119    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:47.083681    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:47.083691    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:47.100580    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:47.100590    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:47.123805    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:47.123811    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:49.637237    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:54.638618    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:54.638917    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:54.662728    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:54.662855    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:54.678673    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:54.678772    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:54.691289    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:54.691376    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:54.702567    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:54.702645    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:54.713911    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:54.713998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:54.724433    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:54.724522    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:54.734834    4114 logs.go:276] 0 containers: []
	W0926 18:03:54.734849    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:54.734916    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:54.745170    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:54.745185    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:54.745191    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:54.757056    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:54.757068    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:54.781517    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:54.781525    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:54.816697    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:54.816713    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:54.828283    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:54.828300    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:54.843062    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:54.843079    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:54.856650    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:54.856661    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:54.868435    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:54.868445    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:54.885725    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:54.885738    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:54.896754    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:54.896767    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:54.908203    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:54.908213    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:54.942806    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:54.942815    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:54.947391    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:54.947398    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:57.462247    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:02.464819    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:02.465362    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:02.498651    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:02.498804    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:02.517864    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:02.517980    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:02.532439    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:02.532534    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:02.544848    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:02.544928    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:02.555401    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:02.555491    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:02.566324    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:02.566403    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:02.576411    4114 logs.go:276] 0 containers: []
	W0926 18:04:02.576446    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:02.576526    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:02.587881    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:02.587896    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:02.587902    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:02.607947    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:02.607960    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:02.626967    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:02.626976    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:02.638916    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:02.638930    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:02.654759    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:02.654770    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:02.674155    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:02.674168    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:02.692117    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:02.692130    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:02.704095    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:02.704106    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:02.737714    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:02.737722    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:02.772748    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:02.772757    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:02.786990    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:02.787000    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:02.812886    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:02.812901    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:02.824698    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:02.824711    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:05.331379    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:10.333416    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:10.333600    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:10.349803    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:10.349914    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:10.366864    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:10.366947    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:10.377419    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:10.377493    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:10.387803    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:10.387898    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:10.398385    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:10.398462    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:10.408789    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:10.408863    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:10.418275    4114 logs.go:276] 0 containers: []
	W0926 18:04:10.418287    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:10.418353    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:10.429077    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:10.429092    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:10.429098    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:10.440761    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:10.440770    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:10.466517    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:10.466527    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:10.504361    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:10.504371    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:10.518192    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:10.518202    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:10.531669    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:10.531680    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:10.546638    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:10.546648    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:10.561585    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:10.561594    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:10.573357    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:10.573366    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:10.609375    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:10.609387    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:10.613741    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:10.613748    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:10.625121    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:10.625137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:10.642782    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:10.642794    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:13.156511    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:18.158493    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:18.158697    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:18.174448    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:18.174540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:18.186275    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:18.186351    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:18.197293    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:18.197374    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:18.207211    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:18.207295    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:18.218090    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:18.218170    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:18.228594    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:18.228673    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:18.238835    4114 logs.go:276] 0 containers: []
	W0926 18:04:18.238848    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:18.238915    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:18.249638    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:18.249653    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:18.249658    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:18.263145    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:18.263154    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:18.282006    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:18.282017    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:18.296980    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:18.296991    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:18.309103    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:18.309114    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:18.333719    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:18.333727    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:18.368556    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:18.368563    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:18.372793    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:18.372802    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:18.386951    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:18.386961    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:18.404520    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:18.404531    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:18.421270    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:18.421281    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:18.456618    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:18.456628    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:18.469231    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:18.469241    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:20.986584    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:25.988739    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:25.988972    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:26.008062    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:26.008169    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:26.022601    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:26.022685    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:26.045857    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:26.045947    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:26.060528    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:26.060608    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:26.071455    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:26.071534    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:26.085572    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:26.085659    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:26.098909    4114 logs.go:276] 0 containers: []
	W0926 18:04:26.098923    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:26.098995    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:26.110381    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:26.110396    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:26.110401    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:26.124320    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:26.124332    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:26.141675    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:26.141685    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:26.177261    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:26.177268    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:26.181834    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:26.181843    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:26.221086    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:26.221096    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:26.235404    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:26.235415    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:26.247404    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:26.247415    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:26.271766    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:26.271773    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:26.282847    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:26.282858    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:26.297224    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:26.297234    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:26.312536    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:26.312546    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:26.327518    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:26.327527    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:28.841664    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:33.843462    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:33.843756    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:33.871340    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:33.871471    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:33.888097    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:33.888195    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:33.901228    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:33.901315    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:33.918650    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:33.918723    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:33.929054    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:33.929131    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:33.941439    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:33.941516    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:33.955505    4114 logs.go:276] 0 containers: []
	W0926 18:04:33.955516    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:33.955586    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:33.966054    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:33.966069    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:33.966075    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:33.981531    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:33.981546    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:33.999841    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:33.999853    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:34.035483    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:34.035498    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:34.049327    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:34.049340    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:34.065466    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:34.065482    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:34.080278    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:34.080291    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:34.092152    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:34.092166    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:34.103936    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:34.103946    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:34.128727    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:34.128734    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:34.140464    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:34.140474    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:34.175479    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:34.175486    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:34.179665    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:34.179671    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:36.696876    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:41.698854    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:41.699087    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:41.714696    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:41.714793    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:41.726596    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:41.726674    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:41.737634    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:41.737717    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:41.748614    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:41.748698    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:41.759387    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:41.759475    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:41.770231    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:41.770312    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:41.782900    4114 logs.go:276] 0 containers: []
	W0926 18:04:41.782912    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:41.782985    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:41.793672    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:41.793690    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:41.793696    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:41.834777    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:41.834788    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:41.849371    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:41.849381    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:41.863935    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:41.863950    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:41.875989    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:41.876000    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:41.887556    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:41.887566    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:41.899315    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:41.899327    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:41.922662    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:41.922670    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:41.936834    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:41.936846    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:41.949940    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:41.949952    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:41.987071    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:41.987086    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:41.992044    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:41.992053    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:42.006206    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:42.006218    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:42.025479    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:42.025488    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:42.040599    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:42.040615    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:44.560410    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:49.562926    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:49.563423    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:49.597137    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:49.597303    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:49.615713    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:49.615826    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:49.630741    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:49.630842    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:49.643511    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:49.643593    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:49.654264    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:49.654341    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:49.665019    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:49.665104    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:49.675298    4114 logs.go:276] 0 containers: []
	W0926 18:04:49.675309    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:49.675381    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:49.686100    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:49.686116    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:49.686122    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:49.698047    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:49.698058    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:49.710163    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:49.710173    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:49.733830    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:49.733837    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:49.767542    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:49.767557    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:49.781362    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:49.781372    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:49.799314    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:49.799324    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:49.803847    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:49.803852    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:49.818692    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:49.818701    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:49.830556    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:49.830569    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:49.842470    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:49.842483    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:49.876357    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:49.876367    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:49.895150    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:49.895160    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:49.906595    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:49.906606    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:49.918537    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:49.918548    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:52.431276    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:57.433590    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:57.434121    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:57.477785    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:57.477948    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:57.499473    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:57.499595    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:57.515285    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:57.515378    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:57.527759    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:57.527837    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:57.539213    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:57.539291    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:57.550142    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:57.550232    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:57.561207    4114 logs.go:276] 0 containers: []
	W0926 18:04:57.561218    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:57.561287    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:57.572190    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:57.572210    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:57.572216    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:57.607937    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:57.607945    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:57.633663    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:57.633671    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:57.638220    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:57.638228    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:57.652741    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:57.652752    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:57.664435    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:57.664447    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:57.676099    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:57.676109    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:57.688363    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:57.688375    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:57.700520    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:57.700530    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:57.714884    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:57.714896    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:57.727117    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:57.727129    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:57.745770    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:57.745780    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:57.757609    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:57.757620    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:57.795226    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:57.795236    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:57.807230    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:57.807244    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:00.324323    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:05.326352    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:05.326601    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:05.348005    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:05.348144    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:05.362442    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:05.362532    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:05.374737    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:05.374828    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:05.389773    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:05.389853    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:05.400617    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:05.400701    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:05.411160    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:05.411237    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:05.421073    4114 logs.go:276] 0 containers: []
	W0926 18:05:05.421084    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:05.421148    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:05.431273    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:05.431291    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:05.431297    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:05.445594    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:05.445605    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:05.456917    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:05.456927    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:05.473092    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:05.473101    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:05.484490    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:05.484498    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:05.519570    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:05.519581    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:05.531780    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:05.531789    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:05.555238    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:05.555247    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:05.578999    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:05.579008    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:05.583416    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:05.583423    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:05.594978    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:05.594989    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:05.606367    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:05.606378    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:05.623120    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:05.623130    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:05.636948    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:05.636959    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:05.672530    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:05.672541    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:08.191365    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:13.192400    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:13.192542    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:13.208957    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:13.209048    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:13.219933    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:13.220005    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:13.231392    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:13.231481    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:13.242277    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:13.242354    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:13.252635    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:13.252710    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:13.262922    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:13.262996    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:13.274758    4114 logs.go:276] 0 containers: []
	W0926 18:05:13.274772    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:13.274843    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:13.284820    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:13.284839    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:13.284844    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:13.298702    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:13.298712    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:13.309813    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:13.309823    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:13.322752    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:13.322763    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:13.337511    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:13.337524    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:13.372207    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:13.372217    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:13.407003    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:13.407013    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:13.418787    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:13.418800    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:13.423235    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:13.423240    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:13.446750    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:13.446757    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:13.463730    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:13.463741    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:13.481389    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:13.481399    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:13.492570    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:13.492581    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:13.506227    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:13.506237    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:13.518531    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:13.518543    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:16.032559    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:21.033545    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:21.033887    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:21.056311    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:21.056436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:21.073058    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:21.073155    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:21.088712    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:21.088795    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:21.101538    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:21.101614    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:21.112060    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:21.112128    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:21.122630    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:21.122722    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:21.133795    4114 logs.go:276] 0 containers: []
	W0926 18:05:21.133807    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:21.133873    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:21.144150    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:21.144168    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:21.144173    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:21.155560    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:21.155571    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:21.168602    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:21.168611    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:21.181392    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:21.181403    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:21.197142    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:21.197158    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:21.231802    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:21.231810    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:21.245496    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:21.245509    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:21.262427    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:21.262439    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:21.279351    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:21.279367    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:21.314348    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:21.314358    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:21.326295    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:21.326311    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:21.349912    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:21.349919    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:21.354032    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:21.354042    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:21.368161    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:21.368173    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:21.380788    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:21.380801    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:23.900892    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:28.903423    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:28.903697    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:28.927423    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:28.927562    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:28.943098    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:28.943192    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:28.955714    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:28.955807    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:28.966602    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:28.966684    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:28.981527    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:28.981619    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:28.991744    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:28.991822    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:29.001917    4114 logs.go:276] 0 containers: []
	W0926 18:05:29.001932    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:29.002004    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:29.016438    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:29.016455    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:29.016461    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:29.027856    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:29.027867    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:29.064452    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:29.064461    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:29.076512    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:29.076527    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:29.088147    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:29.088161    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:29.103160    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:29.103169    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:29.115020    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:29.115031    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:29.129905    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:29.129915    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:29.140953    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:29.140964    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:29.153385    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:29.153397    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:29.157886    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:29.157891    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:29.193411    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:29.193423    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:29.208765    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:29.208778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:29.223714    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:29.223726    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:29.242073    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:29.242083    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:31.767727    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:36.769762    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:36.769986    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:36.787359    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:36.787466    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:36.801599    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:36.801690    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:36.813900    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:36.813994    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:36.824164    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:36.824247    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:36.835146    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:36.835232    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:36.845604    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:36.845680    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:36.864414    4114 logs.go:276] 0 containers: []
	W0926 18:05:36.864427    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:36.864499    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:36.874826    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:36.874845    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:36.874851    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:36.909972    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:36.909981    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:36.923946    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:36.923956    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:36.935540    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:36.935550    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:36.946908    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:36.946920    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:36.964028    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:36.964037    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:36.968550    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:36.968559    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:36.985704    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:36.985717    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:36.997671    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:36.997680    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:37.009303    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:37.009316    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:37.020928    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:37.020938    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:37.036937    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:37.036951    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:37.062146    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:37.062154    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:37.096781    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:37.096792    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:37.109123    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:37.109137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:39.625435    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:44.627919    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:44.628274    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:44.657028    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:44.657190    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:44.675333    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:44.675436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:44.691501    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:44.691594    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:44.703094    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:44.703175    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:44.715065    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:44.715138    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:44.725567    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:44.725636    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:44.736025    4114 logs.go:276] 0 containers: []
	W0926 18:05:44.736038    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:44.736105    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:44.746751    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:44.746768    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:44.746774    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:44.758562    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:44.758572    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:44.770299    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:44.770309    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:44.781868    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:44.781878    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:44.793435    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:44.793449    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:44.818933    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:44.818951    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:44.854543    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:44.854554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:44.869833    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:44.869849    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:44.881985    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:44.881997    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:44.918479    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:44.918493    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:44.933437    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:44.933452    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:44.948632    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:44.948646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:44.968209    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:44.968226    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:44.972615    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:44.972623    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:44.991635    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:44.991651    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:47.504334    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:52.506273    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:52.506390    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:52.520318    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:52.520400    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:52.531159    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:52.531254    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:52.543594    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:52.543682    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:52.555016    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:52.555107    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:52.565612    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:52.565696    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:52.577147    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:52.577223    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:52.588371    4114 logs.go:276] 0 containers: []
	W0926 18:05:52.588383    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:52.588463    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:52.600312    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:52.600329    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:52.600336    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:52.604875    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:52.604884    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:52.622991    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:52.623004    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:52.640065    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:52.640075    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:52.665353    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:52.665363    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:52.700676    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:52.700690    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:52.737929    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:52.737941    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:52.755831    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:52.755843    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:52.775177    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:52.775186    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:52.788275    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:52.788286    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:52.803113    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:52.803125    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:52.817782    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:52.817795    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:52.831022    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:52.831034    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:52.844208    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:52.844224    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:52.859777    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:52.859788    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:55.384542    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:00.386550    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:00.386656    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:00.397882    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:00.397975    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:00.408419    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:00.408500    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:00.419173    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:00.419260    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:00.429594    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:00.429672    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:00.440670    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:00.440754    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:00.451317    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:00.451391    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:00.464335    4114 logs.go:276] 0 containers: []
	W0926 18:06:00.464346    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:00.464412    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:00.474971    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:00.474989    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:00.474995    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:00.513185    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:00.513205    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:00.528938    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:00.528949    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:00.542968    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:00.542978    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:00.554164    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:00.554174    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:00.566085    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:00.566095    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:00.577813    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:00.577829    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:00.582211    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:00.582219    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:00.617271    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:00.617286    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:00.628773    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:00.628785    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:00.640424    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:00.640434    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:00.659547    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:00.659556    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:00.683752    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:00.683761    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:00.699120    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:00.699132    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:06:00.716176    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:00.716187    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:03.229507    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:08.231501    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:08.231636    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:08.244820    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:08.244911    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:08.260574    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:08.260652    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:08.271310    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:08.271379    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:08.281877    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:08.281965    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:08.293681    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:08.293765    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:08.305050    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:08.305135    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:08.315466    4114 logs.go:276] 0 containers: []
	W0926 18:06:08.315479    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:08.315553    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:08.325618    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:08.325637    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:08.325642    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:08.361496    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:08.361505    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:08.365914    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:08.365920    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:08.377689    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:08.377703    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:08.402682    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:08.402695    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:08.415634    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:08.415646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:08.433615    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:08.433627    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:08.445406    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:08.445417    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:08.459544    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:08.459554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:08.471332    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:08.471344    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:08.506531    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:08.506543    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:08.526015    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:08.526027    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:08.539012    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:08.539023    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:08.553888    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:08.553899    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:08.566014    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:08.566024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
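The gathering cycle above is mechanical: resolve each control-plane component's container ID, then tail that container's logs, plus a few host-side sources. Condensed by hand into equivalent shell (illustrative only; container IDs vary per run, and components with no match, such as kindnet here, are simply skipped):

    # one gathering round, as the log shows it component by component
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_${name} --format={{.ID}}); do
        docker logs --tail 400 "$id"
      done
    done
    # host-side sources gathered alongside the containers:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u docker -u cri-docker -n 400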
	I0926 18:06:11.085546    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:16.087660    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:16.087874    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:16.105326    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:16.105424    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:16.118341    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:16.118424    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:16.129927    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:16.129998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:16.144769    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:16.144845    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:16.155452    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:16.155530    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:16.166380    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:16.166458    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:16.176466    4114 logs.go:276] 0 containers: []
	W0926 18:06:16.176476    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:16.176540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:16.186759    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:16.186774    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:16.186780    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:16.191248    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:16.191257    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:16.203079    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:16.203090    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:16.226074    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:16.226081    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:16.244063    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:16.244072    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:16.255783    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:16.255794    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:16.267621    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:16.267632    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:16.282628    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:16.282638    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:16.317225    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:16.317240    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:16.329093    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:16.329103    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:16.341627    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:16.341636    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:16.352821    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:16.352832    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:16.387521    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:16.387528    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:16.401682    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:16.401698    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:16.419452    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:16.419463    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:06:18.938669    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:23.940048    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:23.944711    4114 out.go:201] 
	W0926 18:06:23.947524    4114 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0926 18:06:23.947535    4114 out.go:270] * 
	W0926 18:06:23.948126    4114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:06:23.959515    4114 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-937000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-26 18:06:24.069394 -0700 PDT m=+3153.895333251
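The stderr capture above shows the failure shape: every probe of https://10.0.2.15:8443/healthz times out after about 5s, minikube re-gathers component logs, and once the 6m0s node-wait budget is spent it exits with GUEST_START (exit status 80). As a sketch of that observed loop only — not minikube's actual implementation, which lives in api_server.go and uses the cluster's TLS material rather than curl -k:

    # sketch of the wait loop visible in the log above
    deadline=$((SECONDS + 360))            # "wait 6m0s for node"
    while [ $SECONDS -lt $deadline ]; do
      if curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; then
        echo "apiserver healthy"; exit 0
      fi
      # each failed probe is followed by another round of log gathering
      sleep 2
    done
    echo "apiserver healthz never reported healthy: context deadline exceeded" >&2
    exit 80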
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-937000 -n running-upgrade-937000
E0926 18:06:34.925485    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-937000 -n running-upgrade-937000: exit status 2 (15.559082209s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-937000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-879000          | force-systemd-flag-879000 | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-796000              | force-systemd-env-796000  | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-796000           | force-systemd-env-796000  | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT | 26 Sep 24 17:56 PDT |
	| start   | -p docker-flags-485000                | docker-flags-485000       | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-879000             | force-systemd-flag-879000 | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-879000          | force-systemd-flag-879000 | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT | 26 Sep 24 17:56 PDT |
	| start   | -p cert-expiration-671000             | cert-expiration-671000    | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-485000 ssh               | docker-flags-485000       | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-485000 ssh               | docker-flags-485000       | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-485000                | docker-flags-485000       | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT | 26 Sep 24 17:56 PDT |
	| start   | -p cert-options-759000                | cert-options-759000       | jenkins | v1.34.0 | 26 Sep 24 17:56 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-759000 ssh               | cert-options-759000       | jenkins | v1.34.0 | 26 Sep 24 17:57 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-759000 -- sudo        | cert-options-759000       | jenkins | v1.34.0 | 26 Sep 24 17:57 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-759000                | cert-options-759000       | jenkins | v1.34.0 | 26 Sep 24 17:57 PDT | 26 Sep 24 17:57 PDT |
	| start   | -p running-upgrade-937000             | minikube                  | jenkins | v1.26.0 | 26 Sep 24 17:57 PDT | 26 Sep 24 17:57 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-937000             | running-upgrade-937000    | jenkins | v1.34.0 | 26 Sep 24 17:57 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-671000             | cert-expiration-671000    | jenkins | v1.34.0 | 26 Sep 24 17:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-671000             | cert-expiration-671000    | jenkins | v1.34.0 | 26 Sep 24 18:00 PDT | 26 Sep 24 18:00 PDT |
	| start   | -p kubernetes-upgrade-708000          | kubernetes-upgrade-708000 | jenkins | v1.34.0 | 26 Sep 24 18:00 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-708000          | kubernetes-upgrade-708000 | jenkins | v1.34.0 | 26 Sep 24 18:00 PDT | 26 Sep 24 18:00 PDT |
	| start   | -p kubernetes-upgrade-708000          | kubernetes-upgrade-708000 | jenkins | v1.34.0 | 26 Sep 24 18:00 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-708000          | kubernetes-upgrade-708000 | jenkins | v1.34.0 | 26 Sep 24 18:00 PDT | 26 Sep 24 18:00 PDT |
	| start   | -p stopped-upgrade-211000             | minikube                  | jenkins | v1.26.0 | 26 Sep 24 18:00 PDT | 26 Sep 24 18:01 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-211000 stop           | minikube                  | jenkins | v1.26.0 | 26 Sep 24 18:01 PDT | 26 Sep 24 18:01 PDT |
	| start   | -p stopped-upgrade-211000             | stopped-upgrade-211000    | jenkins | v1.34.0 | 26 Sep 24 18:01 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
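	Two audit rows above carry the whole scenario: the profile running-upgrade-937000 is first started with the released v1.26.0 binary, then restarted in place with the freshly built v1.34.0 binary. Reduced to commands (the v1.26.0 binary path is a placeholder; the test downloads that release to a temporary location):

	    # upgrade-in-place sequence for running-upgrade-937000, per the audit table
	    <downloaded minikube v1.26.0> start -p running-upgrade-937000 --memory=2200 --vm-driver=qemu2
	    out/minikube-darwin-arm64 start -p running-upgrade-937000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2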
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 18:01:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 18:01:13.172483    4572 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:01:13.173007    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:01:13.173021    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:01:13.173028    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:01:13.173595    4572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:01:13.175076    4572 out.go:352] Setting JSON to false
	I0926 18:01:13.193988    4572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3636,"bootTime":1727395237,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:01:13.194084    4572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:01:13.198925    4572 out.go:177] * [stopped-upgrade-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:01:13.205977    4572 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:01:13.206024    4572 notify.go:220] Checking for updates...
	I0926 18:01:13.212931    4572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:01:13.215893    4572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:01:13.219989    4572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:01:13.222981    4572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:01:13.225931    4572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:01:13.229245    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:01:13.232909    4572 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0926 18:01:13.235935    4572 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:01:13.239952    4572 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:01:13.247885    4572 start.go:297] selected driver: qemu2
	I0926 18:01:13.247890    4572 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:13.247940    4572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:01:13.250402    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:01:13.250431    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:01:13.250448    4572 start.go:340] cluster config:
	{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:13.250499    4572 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:01:13.257931    4572 out.go:177] * Starting "stopped-upgrade-211000" primary control-plane node in "stopped-upgrade-211000" cluster
	I0926 18:01:13.261846    4572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 18:01:13.261861    4572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0926 18:01:13.261867    4572 cache.go:56] Caching tarball of preloaded images
	I0926 18:01:13.261931    4572 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:01:13.261945    4572 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0926 18:01:13.261996    4572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0926 18:01:13.262470    4572 start.go:360] acquireMachinesLock for stopped-upgrade-211000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:01:13.262512    4572 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "stopped-upgrade-211000"
	I0926 18:01:13.262520    4572 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:01:13.262525    4572 fix.go:54] fixHost starting: 
	I0926 18:01:13.262625    4572 fix.go:112] recreateIfNeeded on stopped-upgrade-211000: state=Stopped err=<nil>
	W0926 18:01:13.262634    4572 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:01:13.265940    4572 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-211000" ...
	I0926 18:01:12.213297    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:12.213500    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:12.227505    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:12.227602    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:12.239200    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:12.239273    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:12.249790    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:12.249868    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:12.260327    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:12.260399    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:12.276076    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:12.276166    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:12.286348    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:12.286428    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:12.296989    4114 logs.go:276] 0 containers: []
	W0926 18:01:12.297001    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:12.297072    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:12.307890    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:12.307909    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:12.307914    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:12.322480    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:12.322493    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:12.339796    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:12.339813    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:12.352117    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:12.352133    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:12.364023    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:12.364037    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:12.389038    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:12.389056    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:12.426205    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:12.426215    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:12.430981    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:12.430990    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:12.465538    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:12.465549    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:12.492543    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:12.492554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:12.507927    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:12.507938    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:12.524012    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:12.524024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:12.542169    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:12.542181    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:12.556805    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:12.556819    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:12.571145    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:12.571156    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:12.585932    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:12.585950    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:12.597217    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:12.597227    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:13.273947    4572 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:01:13.274066    4572 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376,hostname=stopped-upgrade-211000 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/disk.qcow2
	I0926 18:01:13.318181    4572 main.go:141] libmachine: STDOUT: 
	I0926 18:01:13.318205    4572 main.go:141] libmachine: STDERR: 
	I0926 18:01:13.318213    4572 main.go:141] libmachine: Waiting for VM to start (ssh -p 50504 docker@127.0.0.1)...
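	The libmachine invocation above is logged as a single line; reflowed with shell continuations for auditability (same flags, verbatim):

	    qemu-system-aarch64 -M virt,highmem=off -cpu host \
	      -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	      -display none -accel hvf -m 2200 -smp 2 -boot d \
	      -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/boot2docker.iso \
	      -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/monitor,server,nowait \
	      -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/qemu.pid \
	      -nic user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376,hostname=stopped-upgrade-211000 \
	      -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/disk.qcow2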
	I0926 18:01:15.110447    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:20.112959    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:20.113094    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:20.124355    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:20.124446    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:20.135575    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:20.135662    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:20.146576    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:20.146655    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:20.157681    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:20.157764    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:20.168475    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:20.168556    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:20.179116    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:20.179193    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:20.189237    4114 logs.go:276] 0 containers: []
	W0926 18:01:20.189248    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:20.189315    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:20.199960    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:20.199978    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:20.199984    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:20.211250    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:20.211261    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:20.223252    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:20.223262    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:20.246841    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:20.246850    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:20.258818    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:20.258829    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:20.275346    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:20.275356    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:20.286861    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:20.286871    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:20.322452    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:20.322462    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:20.334589    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:20.334601    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:20.349353    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:20.349368    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:20.354306    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:20.354312    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:20.383185    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:20.383196    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:20.397132    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:20.397142    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:20.413499    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:20.413511    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:20.428991    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:20.429001    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:20.446428    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:20.446438    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:20.457805    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:20.457815    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:22.997836    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:28.000016    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:28.000210    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:28.012554    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:28.012647    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:28.023797    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:28.023876    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:28.034159    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:28.034244    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:28.046513    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:28.046605    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:28.062718    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:28.062802    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:28.073647    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:28.073730    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:28.091164    4114 logs.go:276] 0 containers: []
	W0926 18:01:28.091176    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:28.091245    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:28.111888    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:28.111907    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:28.111912    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:28.150626    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:28.150634    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:28.166137    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:28.166149    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:28.178691    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:28.178702    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:28.199260    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:28.199270    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:28.210844    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:28.210854    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:28.234067    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:28.234075    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:28.249950    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:28.249964    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:28.267887    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:28.267900    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:28.281012    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:28.281024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:28.292782    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:28.292795    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:28.304805    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:28.304818    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:28.318227    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:28.318239    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:28.346360    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:28.346375    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:28.361521    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:28.361536    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:28.374710    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:28.374725    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:28.379133    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:28.379140    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:30.916076    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:33.869433    4572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0926 18:01:33.870192    4572 machine.go:93] provisionDockerMachine start ...
	I0926 18:01:33.870353    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:33.870829    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:33.870843    4572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:01:33.956494    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 18:01:33.956523    4572 buildroot.go:166] provisioning hostname "stopped-upgrade-211000"
	I0926 18:01:33.956663    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:33.956891    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:33.956903    4572 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-211000 && echo "stopped-upgrade-211000" | sudo tee /etc/hostname
	I0926 18:01:34.038777    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-211000
	
	I0926 18:01:34.038880    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.039091    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.039109    4572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:01:34.110647    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 18:01:34.110662    4572 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1075/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1075/.minikube}
	I0926 18:01:34.110671    4572 buildroot.go:174] setting up certificates
	I0926 18:01:34.110676    4572 provision.go:84] configureAuth start
	I0926 18:01:34.110684    4572 provision.go:143] copyHostCerts
	I0926 18:01:34.110769    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem, removing ...
	I0926 18:01:34.110777    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem
	I0926 18:01:34.110886    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem (1078 bytes)
	I0926 18:01:34.111074    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem, removing ...
	I0926 18:01:34.111079    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem
	I0926 18:01:34.111137    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem (1123 bytes)
	I0926 18:01:34.111255    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem, removing ...
	I0926 18:01:34.111259    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem
	I0926 18:01:34.111310    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem (1679 bytes)
	I0926 18:01:34.111400    4572 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-211000 san=[127.0.0.1 localhost minikube stopped-upgrade-211000]
	I0926 18:01:34.360517    4572 provision.go:177] copyRemoteCerts
	I0926 18:01:34.360589    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:01:34.360601    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:34.396643    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 18:01:34.403243    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0926 18:01:34.409917    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:01:34.416906    4572 provision.go:87] duration metric: took 306.229542ms to configureAuth
	I0926 18:01:34.416915    4572 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:01:34.417010    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:01:34.417056    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.417141    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.417146    4572 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:01:34.483057    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:01:34.483067    4572 buildroot.go:70] root file system type: tmpfs
	I0926 18:01:34.483121    4572 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:01:34.483167    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.483283    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.483316    4572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:01:34.552202    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:01:34.552273    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.552386    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.552395    4572 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:01:34.919340    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:01:34.919353    4572 machine.go:96] duration metric: took 1.049180708s to provisionDockerMachine
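
The SSH command above is an idempotent unit update: the rendered unit goes to docker.service.new, and only when it differs from the installed file is it moved into place and followed by daemon-reload/enable/restart (here diff failed because no docker.service existed yet, so the new file was installed). A rough Go sketch of the same compare-then-swap idiom; the updateUnit helper and its error handling are invented for illustration, and the program would need to run as root:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// updateUnit only swaps the new unit in (and reloads/enables/restarts
// docker) when the rendered content differs from what is installed,
// so an unchanged config never triggers a needless docker restart.
func updateUnit(path string, content []byte) error {
	old, _ := os.ReadFile(path) // a missing file compares as empty, like diff's stat error
	if bytes.Equal(old, content) {
		return nil
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated sample
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		log.Fatal(err)
	}
}
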
	I0926 18:01:34.919365    4572 start.go:293] postStartSetup for "stopped-upgrade-211000" (driver="qemu2")
	I0926 18:01:34.919371    4572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:01:34.919437    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:01:34.919446    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:34.957997    4572 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:01:34.959360    4572 info.go:137] Remote host: Buildroot 2021.02.12
	I0926 18:01:34.959369    4572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/addons for local assets ...
	I0926 18:01:34.959462    4572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/files for local assets ...
	I0926 18:01:34.959588    4572 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem -> 15972.pem in /etc/ssl/certs
	I0926 18:01:34.959723    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:01:34.962654    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /etc/ssl/certs/15972.pem (1708 bytes)
	I0926 18:01:34.970747    4572 start.go:296] duration metric: took 51.376666ms for postStartSetup
	I0926 18:01:34.970768    4572 fix.go:56] duration metric: took 21.708849208s for fixHost
	I0926 18:01:34.970817    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.970939    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.970947    4572 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:01:35.034458    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398894.944262754
	
	I0926 18:01:35.034467    4572 fix.go:216] guest clock: 1727398894.944262754
	I0926 18:01:35.034472    4572 fix.go:229] Guest: 2024-09-26 18:01:34.944262754 -0700 PDT Remote: 2024-09-26 18:01:34.97077 -0700 PDT m=+21.828480918 (delta=-26.507246ms)
	I0926 18:01:35.034483    4572 fix.go:200] guest clock delta is within tolerance: -26.507246ms
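
The guest clock check runs `date +%s.%N` inside the VM and compares the result against the host clock, accepting small skew (here about -26ms). A small Go sketch of parsing that output and computing the delta; this is not minikube's fix.go verbatim, just the same arithmetic:

package main

import (
	"fmt"
	"log"
	"strconv"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time. Going
// through float64 costs a few hundred nanoseconds of precision, which is
// harmless against a millisecond-scale tolerance.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1727398894.944262754") // value from the log above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("guest clock delta: %s\n", time.Since(guest))
}
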
	I0926 18:01:35.034486    4572 start.go:83] releasing machines lock for "stopped-upgrade-211000", held for 21.772578042s
	I0926 18:01:35.034556    4572 ssh_runner.go:195] Run: cat /version.json
	I0926 18:01:35.034565    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:35.034568    4572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 18:01:35.034618    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	W0926 18:01:35.035171    4572 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50647->127.0.0.1:50504: read: connection reset by peer
	I0926 18:01:35.035187    4572 retry.go:31] will retry after 258.15249ms: ssh: handshake failed: read tcp 127.0.0.1:50647->127.0.0.1:50504: read: connection reset by peer
	W0926 18:01:35.066788    4572 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0926 18:01:35.066844    4572 ssh_runner.go:195] Run: systemctl --version
	I0926 18:01:35.068634    4572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 18:01:35.070229    4572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:01:35.070260    4572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0926 18:01:35.073543    4572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0926 18:01:35.078840    4572 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
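
The two find/sed invocations above rewrite any bridge/podman CNI config in /etc/cni/net.d so its subnet and gateway match the 10.244.0.0/16 pod CIDR. A structured sketch of the same rewrite in Go over a sample conflist; minikube does this in-guest with sed, and the JSON shape below is a trimmed, typical podman bridge conflist assumed for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Rewrites the first ipam range of a bridge CNI conflist to the cluster's
// pod CIDR -- a structured take on the sed substitutions in the log.
func main() {
	conflist := []byte(`{
	  "cniVersion": "0.4.0",
	  "name": "podman",
	  "plugins": [{
	    "type": "bridge",
	    "ipam": {"type": "host-local", "ranges": [[{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}]]}
	  }]
	}`)
	var doc map[string]any
	if err := json.Unmarshal(conflist, &doc); err != nil {
		log.Fatal(err)
	}
	plugins := doc["plugins"].([]any)
	ipam := plugins[0].(map[string]any)["ipam"].(map[string]any)
	r := ipam["ranges"].([]any)[0].([]any)[0].(map[string]any)
	r["subnet"] = "10.244.0.0/16" // pod CIDR used by the test cluster
	r["gateway"] = "10.244.0.1"
	out, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(out))
}
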
	I0926 18:01:35.078849    4572 start.go:495] detecting cgroup driver to use...
	I0926 18:01:35.078927    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:01:35.087146    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0926 18:01:35.090131    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:01:35.093577    4572 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:01:35.093603    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:01:35.097233    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:01:35.100997    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:01:35.104186    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:01:35.107000    4572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:01:35.109872    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:01:35.113293    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:01:35.116792    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 18:01:35.120082    4572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:01:35.122616    4572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 18:01:35.125689    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:35.196788    4572 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:01:35.203508    4572 start.go:495] detecting cgroup driver to use...
	I0926 18:01:35.203589    4572 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:01:35.208712    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:01:35.213614    4572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:01:35.223128    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:01:35.227755    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:01:35.232014    4572 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:01:35.272387    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:01:35.277358    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:01:35.282924    4572 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:01:35.284152    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:01:35.286640    4572 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0926 18:01:35.291570    4572 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:01:35.372366    4572 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:01:35.447166    4572 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:01:35.447226    4572 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 18:01:35.452826    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:35.524626    4572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:01:36.638958    4572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.11434775s)
	I0926 18:01:36.639027    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 18:01:36.643525    4572 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 18:01:36.649496    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:01:36.653895    4572 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 18:01:36.732564    4572 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 18:01:36.813653    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:36.893550    4572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 18:01:36.899407    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:01:36.903551    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:36.983964    4572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 18:01:37.021824    4572 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 18:01:37.021911    4572 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0926 18:01:37.023911    4572 start.go:563] Will wait 60s for crictl version
	I0926 18:01:37.023967    4572 ssh_runner.go:195] Run: which crictl
	I0926 18:01:37.025469    4572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 18:01:37.039876    4572 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0926 18:01:37.039949    4572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:01:37.056116    4572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:01:37.077725    4572 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0926 18:01:37.077809    4572 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0926 18:01:37.079082    4572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
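
The /etc/hosts update above is a drop-then-append idiom: filter out any existing host.minikube.internal line, append the fresh mapping, and copy a temp file over /etc/hosts so the entry is never duplicated. A hedged Go equivalent (the .tmp suffix and atomic-rename approach are illustrative, not the exact shell pipeline minikube runs):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t"+host and appends the
// desired mapping, mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // swap in atomically
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
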
	I0926 18:01:37.082620    4572 kubeadm.go:883] updating cluster {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0926 18:01:37.082662    4572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 18:01:37.082719    4572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:01:37.095629    4572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 18:01:37.095637    4572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0926 18:01:37.095686    4572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 18:01:37.098750    4572 ssh_runner.go:195] Run: which lz4
	I0926 18:01:37.100092    4572 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 18:01:37.101319    4572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 18:01:37.101330    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0926 18:01:38.102998    4572 docker.go:649] duration metric: took 1.002984333s to copy over tarball
	I0926 18:01:38.103077    4572 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 18:01:35.918755    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:35.919126    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:35.953025    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:35.953194    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:35.973173    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:35.973273    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:35.987813    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:35.987903    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:36.000434    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:36.000519    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:36.010971    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:36.011038    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:36.021411    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:36.021488    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:36.032365    4114 logs.go:276] 0 containers: []
	W0926 18:01:36.032377    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:36.032445    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:36.047341    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:36.047357    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:36.047363    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:36.087793    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:36.087803    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:36.118218    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:36.118233    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:36.147607    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:36.147621    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:36.167506    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:36.167520    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:36.178471    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:36.178483    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:36.201912    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:36.201921    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:36.237659    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:36.237672    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:36.242310    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:36.242317    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:36.267598    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:36.267609    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:36.280323    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:36.280336    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:36.291395    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:36.291407    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:36.305136    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:36.305150    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:36.316559    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:36.316568    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:36.328814    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:36.328827    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:36.344052    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:36.344062    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:36.362980    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:36.362990    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:38.875857    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:39.252756    4572 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.14969725s)
	I0926 18:01:39.252769    4572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 18:01:39.268275    4572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 18:01:39.271576    4572 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0926 18:01:39.276715    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:39.355927    4572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:01:40.839843    4572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.483941959s)
	I0926 18:01:40.839971    4572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:01:40.851336    4572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 18:01:40.851344    4572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0926 18:01:40.851349    4572 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0926 18:01:40.856383    4572 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:40.858546    4572 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:40.860828    4572 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:40.861056    4572 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:40.862725    4572 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:40.862747    4572 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:40.864158    4572 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:40.864177    4572 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:40.865485    4572 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:40.865561    4572 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:40.866830    4572 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0926 18:01:40.867007    4572 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:40.868220    4572 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:40.868314    4572 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:40.869230    4572 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0926 18:01:40.869826    4572 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.299109    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.309760    4572 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0926 18:01:41.309790    4572 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.309858    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.319127    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.320320    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.320629    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0926 18:01:41.329584    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:41.331374    4572 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0926 18:01:41.331391    4572 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0926 18:01:41.331401    4572 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.331392    4572 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.331453    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.331499    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.342582    4572 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0926 18:01:41.342603    4572 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:41.342672    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0926 18:01:41.354509    4572 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0926 18:01:41.354658    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.355344    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0926 18:01:41.355370    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0926 18:01:41.362673    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0926 18:01:41.367837    4572 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0926 18:01:41.367856    4572 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.367920    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.376632    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0926 18:01:41.384714    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0926 18:01:41.384850    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0926 18:01:41.387675    4572 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0926 18:01:41.387689    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0926 18:01:41.387696    4572 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0926 18:01:41.387707    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0926 18:01:41.387747    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0926 18:01:41.394086    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.426457    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0926 18:01:41.426579    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0926 18:01:41.426809    4572 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0926 18:01:41.426826    4572 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.426868    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.437990    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0926 18:01:41.438018    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0926 18:01:41.441449    4572 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0926 18:01:41.441459    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0926 18:01:41.458079    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0926 18:01:41.458210    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0926 18:01:41.485094    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0926 18:01:41.485116    4572 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0926 18:01:41.485122    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0926 18:01:41.485134    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0926 18:01:41.485155    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0926 18:01:41.523003    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0926 18:01:41.706255    4572 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0926 18:01:41.706278    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0926 18:01:41.826890    4572 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0926 18:01:41.827009    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.844017    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0926 18:01:41.844347    4572 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0926 18:01:41.844371    4572 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.844446    4572 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.857521    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0926 18:01:41.857651    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0926 18:01:41.859157    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0926 18:01:41.859169    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0926 18:01:41.887809    4572 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0926 18:01:41.887824    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0926 18:01:42.119705    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0926 18:01:42.119753    4572 cache_images.go:92] duration metric: took 1.268431292s to LoadCachedImages
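
The LoadCachedImages loop above follows one pattern per image: inspect for the expected content hash, remove the stale tag on mismatch, copy the cached tarball into /var/lib/minikube/images, and pipe it through docker load. A condensed Go sketch of that pattern; the hash is a placeholder and the in-guest scp step from the log is omitted, so this runs against a local docker daemon only:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// ensureImage mirrors the cache_images flow in the log: if the image is not
// present at the expected content hash, drop any stale tag and stream the
// cached tarball into "docker load".
func ensureImage(image, wantID, tarball string) error {
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already loaded at the right hash
	}
	_ = exec.Command("docker", "rmi", image).Run() // ignore "no such image"
	f, err := os.Open(tarball)
	if err != nil {
		return fmt.Errorf("cached image missing: %w", err)
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	return load.Run()
}

func main() {
	err := ensureImage(
		"registry.k8s.io/pause:3.7",
		"sha256:...", // placeholder; the log compares full content hashes
		"/var/lib/minikube/images/pause_3.7",
	)
	if err != nil {
		log.Fatal(err)
	}
}
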
	W0926 18:01:42.119803    4572 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0926 18:01:42.119809    4572 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0926 18:01:42.119855    4572 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 18:01:42.119942    4572 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 18:01:42.133154    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:01:42.133166    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:01:42.133171    4572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 18:01:42.133179    4572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-211000 NodeName:stopped-upgrade-211000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 18:01:42.133244    4572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-211000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 18:01:42.133301    4572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0926 18:01:42.136973    4572 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 18:01:42.137020    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 18:01:42.139780    4572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0926 18:01:42.144409    4572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 18:01:42.149449    4572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0926 18:01:42.154960    4572 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0926 18:01:42.156003    4572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 18:01:42.159664    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:42.236386    4572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:01:42.241716    4572 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000 for IP: 10.0.2.15
	I0926 18:01:42.241726    4572 certs.go:194] generating shared ca certs ...
	I0926 18:01:42.241736    4572 certs.go:226] acquiring lock for ca certs: {Name:mk27a718ead98149a4ca4d0cc52012d8aa60b9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.241903    4572 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key
	I0926 18:01:42.241958    4572 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key
	I0926 18:01:42.241965    4572 certs.go:256] generating profile certs ...
	I0926 18:01:42.242040    4572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key
	I0926 18:01:42.242056    4572 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c
	I0926 18:01:42.242064    4572 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0926 18:01:42.351424    4572 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c ...
	I0926 18:01:42.351440    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c: {Name:mkdb72198780a42d20f224a6157ee1d5d04fb741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.351770    4572 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c ...
	I0926 18:01:42.351778    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c: {Name:mk7cd4a50e2508f8f479fffc7d9c3adfbafa760a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.351913    4572 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt
	I0926 18:01:42.352064    4572 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key
	I0926 18:01:42.352232    4572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.key
	I0926 18:01:42.352374    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem (1338 bytes)
	W0926 18:01:42.352408    4572 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597_empty.pem, impossibly tiny 0 bytes
	I0926 18:01:42.352414    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 18:01:42.352438    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem (1078 bytes)
	I0926 18:01:42.352455    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem (1123 bytes)
	I0926 18:01:42.352476    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem (1679 bytes)
	I0926 18:01:42.352512    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem (1708 bytes)
	I0926 18:01:42.352828    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 18:01:42.360169    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 18:01:42.367090    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 18:01:42.373721    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 18:01:42.381170    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0926 18:01:42.388437    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 18:01:42.395627    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 18:01:42.402603    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 18:01:42.409702    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 18:01:42.416872    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem --> /usr/share/ca-certificates/1597.pem (1338 bytes)
	I0926 18:01:42.423983    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /usr/share/ca-certificates/15972.pem (1708 bytes)
	I0926 18:01:42.430667    4572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 18:01:42.435646    4572 ssh_runner.go:195] Run: openssl version
	I0926 18:01:42.437482    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15972.pem && ln -fs /usr/share/ca-certificates/15972.pem /etc/ssl/certs/15972.pem"
	I0926 18:01:42.440873    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.442393    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:28 /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.442419    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.444269    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15972.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 18:01:42.447231    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 18:01:42.450130    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.451523    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.451554    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.453170    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 18:01:42.456255    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597.pem && ln -fs /usr/share/ca-certificates/1597.pem /etc/ssl/certs/1597.pem"
	I0926 18:01:42.459356    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.460708    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:28 /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.460731    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.462544    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597.pem /etc/ssl/certs/51391683.0"
	I0926 18:01:42.466809    4572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:01:42.468191    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 18:01:42.469960    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 18:01:42.471675    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 18:01:42.473497    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 18:01:42.475239    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 18:01:42.476856    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
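Each of the six openssl checks above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit would force regeneration. A hand-run equivalent for one of the files checked:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
        echo "valid for at least another 24 hours"
    else
        echo "expires within 24 hours (or is already expired)"
    fi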
	I0926 18:01:42.478723    4572 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:42.478791    4572 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:01:42.489085    4572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 18:01:42.492560    4572 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 18:01:42.492570    4572 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 18:01:42.492604    4572 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 18:01:42.495478    4572 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 18:01:42.495806    4572 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-211000" does not appear in /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:01:42.495896    4572 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1075/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-211000" cluster setting kubeconfig missing "stopped-upgrade-211000" context setting]
	I0926 18:01:42.496086    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.496514    4572 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ce570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:01:42.496846    4572 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 18:01:42.499376    4572 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-211000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
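The drift detection above is a plain unified diff between the kubeadm config already on the VM and the freshly rendered one; any difference (here the criSocket URI scheme and the cgroup driver) makes diff exit non-zero and forces a reconfigure. The check reduces to:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "config drift detected; reconfiguring from kubeadm.yaml.new"
    fi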
	I0926 18:01:42.499381    4572 kubeadm.go:1160] stopping kube-system containers ...
	I0926 18:01:42.499440    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:01:42.510072    4572 docker.go:483] Stopping containers: [240fdc9989e4 6389d9bb1ecd aaaef996b4e8 6707ec992f36 1b1da32ebdf8 cbdda73835f3 0be1021df9b4 ec810a93628b]
	I0926 18:01:42.510162    4572 ssh_runner.go:195] Run: docker stop 240fdc9989e4 6389d9bb1ecd aaaef996b4e8 6707ec992f36 1b1da32ebdf8 cbdda73835f3 0be1021df9b4 ec810a93628b
	I0926 18:01:42.520433    4572 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 18:01:42.525965    4572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:01:42.529237    4572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:01:42.529249    4572 kubeadm.go:157] found existing configuration files:
	
	I0926 18:01:42.529277    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0926 18:01:42.532270    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:01:42.532294    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:01:42.534897    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0926 18:01:42.537569    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:01:42.537599    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:01:42.540505    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0926 18:01:42.542924    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:01:42.542947    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:01:42.545653    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0926 18:01:42.548590    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:01:42.548614    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
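The four grep/rm pairs above apply one rule per kubeconfig: keep the file only if it already points at this profile's control-plane endpoint, otherwise remove it so kubeadm can rewrite it. Expressed as a loop (the variable name is ours, not minikube's):

    ENDPOINT="https://control-plane.minikube.internal:50538"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done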
	I0926 18:01:42.551171    4572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:01:42.553920    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:42.577058    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:42.874306    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:43.013990    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:43.046216    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
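Rather than a full kubeadm init, the restart path replays individual init phases against the same config file, in the order logged above. A condensed sketch (the unquoted $phase is deliberate so "certs all" splits into subcommand and argument):

    BIN=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done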
	I0926 18:01:43.072410    4572 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:01:43.072509    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:43.877924    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:43.878045    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:43.890173    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:43.890261    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:43.902156    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:43.902240    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:43.913833    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:43.913918    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:43.925993    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:43.926084    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:43.937429    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:43.937526    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:43.950024    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:43.950108    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:43.966108    4114 logs.go:276] 0 containers: []
	W0926 18:01:43.966120    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:43.966195    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:43.979432    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:43.979450    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:43.979456    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:43.996939    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:43.996953    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:44.009969    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:44.009984    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:44.025338    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:44.025351    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:44.038220    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:44.038232    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:44.050744    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:44.050757    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:44.075075    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:44.075091    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:44.092736    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:44.092749    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:44.132094    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:44.132121    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:44.148632    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:44.148647    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:44.175721    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:44.175738    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:44.192371    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:44.192383    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:44.204590    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:44.204606    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:44.217021    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:44.217034    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:44.221697    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:44.221706    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:44.234808    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:44.234821    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:44.253427    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:44.253441    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
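Each "Gathering logs for ..." step above is a bounded read of one container's output, capped at 400 lines so a wedged component cannot flood the report. Per component, the step amounts to (container ID is a placeholder):

    docker ps -a --filter=name=k8s_etcd --format '{{.ID}}'   # locate the container(s)
    docker logs --tail 400 <container-id>                    # bounded log read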
	I0926 18:01:43.573820    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:44.074542    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:44.078899    4572 api_server.go:72] duration metric: took 1.006521167s to wait for apiserver process to appear ...
	I0926 18:01:44.078908    4572 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:01:44.078924    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
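The healthz wait loop issues an HTTPS GET against the apiserver's /healthz endpoint and treats a client timeout as "stopped". A hand-run equivalent (the 5 s timeout is an assumption; -k skips verification of the self-signed serving certificate):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo " (exit $?)"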
	I0926 18:01:46.791907    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:49.080966    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:49.081074    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:51.794042    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:51.794389    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:51.822315    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:51.822457    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:51.839536    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:51.839634    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:51.853404    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:51.853486    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:51.865316    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:51.865398    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:51.875821    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:51.875909    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:51.886545    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:51.886628    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:51.896418    4114 logs.go:276] 0 containers: []
	W0926 18:01:51.896433    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:51.896494    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:51.906980    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:51.906998    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:51.907003    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:51.924210    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:51.924227    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:51.941027    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:01:51.941038    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:01:51.953209    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:51.953220    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:51.965479    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:51.965489    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:51.977207    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:51.977217    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:51.999756    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:51.999764    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:52.013854    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:52.013865    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:52.018148    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:52.018155    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:01:52.053488    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:52.053499    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:52.078429    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:01:52.078439    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:01:52.094421    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:01:52.094436    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:01:52.129589    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:52.129597    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:52.145363    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:01:52.145376    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:01:52.161919    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:52.161931    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:52.178338    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:52.178349    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:52.191004    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:01:52.191013    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:01:54.081837    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:54.081891    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:54.704775    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:59.082552    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:59.082626    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:59.705397    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:59.705497    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:01:59.720155    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:01:59.720236    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:01:59.731324    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:01:59.731413    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:01:59.746671    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:01:59.746758    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:01:59.757199    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:01:59.757288    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:01:59.769130    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:01:59.769212    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:01:59.779676    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:01:59.779759    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:01:59.793564    4114 logs.go:276] 0 containers: []
	W0926 18:01:59.793576    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:01:59.793649    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:01:59.804330    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:01:59.804347    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:01:59.804352    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:01:59.825641    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:01:59.825655    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:01:59.838555    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:01:59.838569    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:01:59.850328    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:01:59.850338    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:01:59.862512    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:01:59.862526    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:01:59.874722    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:01:59.874736    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:01:59.889670    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:01:59.889681    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:01:59.903701    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:01:59.903716    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:01:59.928522    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:01:59.928537    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:01:59.943431    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:01:59.943446    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:01:59.965639    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:01:59.965654    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:01:59.969970    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:01:59.969977    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:00.006777    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:02:00.006786    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:02:00.021632    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:02:00.021646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:02:00.039253    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:02:00.039264    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:02:00.050861    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:02:00.050876    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:02:00.067124    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:00.067136    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:02.607071    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:04.083435    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:04.083518    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:07.607400    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:07.607703    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:07.636708    4114 logs.go:276] 2 containers: [cc4a850690a9 936423c2e273]
	I0926 18:02:07.636810    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:07.650233    4114 logs.go:276] 2 containers: [44a2723bec83 6536b1c9a022]
	I0926 18:02:07.650316    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:07.662468    4114 logs.go:276] 1 containers: [298c45e4bf8c]
	I0926 18:02:07.662541    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:07.672941    4114 logs.go:276] 2 containers: [0f8928a1653b 6ebd37f8910f]
	I0926 18:02:07.673023    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:07.683618    4114 logs.go:276] 1 containers: [0abea972e936]
	I0926 18:02:07.683706    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:07.694091    4114 logs.go:276] 2 containers: [2177338a4ad0 8624e6cc00e0]
	I0926 18:02:07.694181    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:07.704150    4114 logs.go:276] 0 containers: []
	W0926 18:02:07.704162    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:07.704228    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:07.714572    4114 logs.go:276] 2 containers: [a12b3a4b0ff8 9bcb671251a4]
	I0926 18:02:07.714590    4114 logs.go:123] Gathering logs for kube-apiserver [cc4a850690a9] ...
	I0926 18:02:07.714595    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc4a850690a9"
	I0926 18:02:07.727989    4114 logs.go:123] Gathering logs for kube-controller-manager [2177338a4ad0] ...
	I0926 18:02:07.727999    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177338a4ad0"
	I0926 18:02:07.744769    4114 logs.go:123] Gathering logs for kube-controller-manager [8624e6cc00e0] ...
	I0926 18:02:07.744784    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8624e6cc00e0"
	I0926 18:02:07.756223    4114 logs.go:123] Gathering logs for kube-scheduler [6ebd37f8910f] ...
	I0926 18:02:07.756232    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ebd37f8910f"
	I0926 18:02:07.771362    4114 logs.go:123] Gathering logs for storage-provisioner [a12b3a4b0ff8] ...
	I0926 18:02:07.771377    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a12b3a4b0ff8"
	I0926 18:02:07.783205    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:02:07.783215    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:07.795438    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:07.795454    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:07.830472    4114 logs.go:123] Gathering logs for etcd [44a2723bec83] ...
	I0926 18:02:07.830480    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44a2723bec83"
	I0926 18:02:07.844767    4114 logs.go:123] Gathering logs for kube-scheduler [0f8928a1653b] ...
	I0926 18:02:07.844778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8928a1653b"
	I0926 18:02:07.857250    4114 logs.go:123] Gathering logs for kube-proxy [0abea972e936] ...
	I0926 18:02:07.857262    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0abea972e936"
	I0926 18:02:07.869440    4114 logs.go:123] Gathering logs for storage-provisioner [9bcb671251a4] ...
	I0926 18:02:07.869451    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bcb671251a4"
	I0926 18:02:07.880634    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:07.880643    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:07.885345    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:07.885355    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:07.920265    4114 logs.go:123] Gathering logs for coredns [298c45e4bf8c] ...
	I0926 18:02:07.920278    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 298c45e4bf8c"
	I0926 18:02:07.933116    4114 logs.go:123] Gathering logs for kube-apiserver [936423c2e273] ...
	I0926 18:02:07.933128    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 936423c2e273"
	I0926 18:02:07.957558    4114 logs.go:123] Gathering logs for etcd [6536b1c9a022] ...
	I0926 18:02:07.957567    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6536b1c9a022"
	I0926 18:02:07.972128    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:07.972138    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:09.084785    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:09.084835    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:10.497467    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:15.499797    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:15.499962    4114 kubeadm.go:597] duration metric: took 4m4.51839775s to restartPrimaryControlPlane
	W0926 18:02:15.500121    4114 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0926 18:02:15.500165    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0926 18:02:16.554566    4114 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054408083s)
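Because the control-plane restart timed out, minikube falls back to wiping the node with kubeadm reset before re-initializing: --force skips the interactive confirmation and --cri-socket points reset at the cri-dockerd endpoint, as in the command logged above:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force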
	I0926 18:02:16.554646    4114 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:02:16.559474    4114 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:02:16.562329    4114 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:02:16.565033    4114 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:02:16.565040    4114 kubeadm.go:157] found existing configuration files:
	
	I0926 18:02:16.565062    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf
	I0926 18:02:16.567509    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:02:16.567540    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:02:16.569915    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf
	I0926 18:02:16.572851    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:02:16.572885    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:02:16.576060    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf
	I0926 18:02:16.578882    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:02:16.578913    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:02:16.581813    4114 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf
	I0926 18:02:16.584866    4114 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:02:16.584892    4114 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 18:02:16.588191    4114 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
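The long --ignore-preflight-errors list names exactly the preflight checks that would otherwise abort a re-init over leftover state: populated manifest and etcd directories, a bound kubelet port, swap, and the CPU/memory minimums. A condensed form of the same invocation, keeping only a representative subset of the overrides:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem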
	I0926 18:02:16.605009    4114 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0926 18:02:16.605151    4114 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 18:02:16.660826    4114 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 18:02:16.660879    4114 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 18:02:16.660929    4114 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 18:02:16.712964    4114 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:02:16.717008    4114 out.go:235]   - Generating certificates and keys ...
	I0926 18:02:16.717044    4114 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 18:02:16.717078    4114 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 18:02:16.717123    4114 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:02:16.717156    4114 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0926 18:02:16.717189    4114 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:02:16.717216    4114 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0926 18:02:16.717248    4114 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0926 18:02:16.717281    4114 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:02:16.717323    4114 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:02:16.717378    4114 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:02:16.717399    4114 kubeadm.go:310] [certs] Using the existing "sa" key
	I0926 18:02:16.717427    4114 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:02:16.775923    4114 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:02:16.829960    4114 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:02:17.066595    4114 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:02:17.216810    4114 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:02:17.245511    4114 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:02:17.245760    4114 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:02:17.245888    4114 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 18:02:17.334473    4114 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:02:14.086307    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:14.086396    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:17.337503    4114 out.go:235]   - Booting up control plane ...
	I0926 18:02:17.337589    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:02:17.337646    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:02:17.337709    4114 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:02:17.337772    4114 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:02:17.338064    4114 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0926 18:02:21.838741    4114 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501433 seconds
	I0926 18:02:21.838807    4114 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 18:02:21.843851    4114 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 18:02:22.356821    4114 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 18:02:22.357082    4114 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-937000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 18:02:22.859809    4114 kubeadm.go:310] [bootstrap-token] Using token: 5ikksf.pbrpxtw98s1hgyjs
	I0926 18:02:19.087980    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:19.088005    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:22.865766    4114 out.go:235]   - Configuring RBAC rules ...
	I0926 18:02:22.865838    4114 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 18:02:22.865886    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 18:02:22.873896    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 18:02:22.874679    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 18:02:22.875530    4114 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 18:02:22.876379    4114 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 18:02:22.880014    4114 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 18:02:23.057922    4114 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 18:02:23.263562    4114 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 18:02:23.264121    4114 kubeadm.go:310] 
	I0926 18:02:23.264156    4114 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 18:02:23.264159    4114 kubeadm.go:310] 
	I0926 18:02:23.264194    4114 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 18:02:23.264200    4114 kubeadm.go:310] 
	I0926 18:02:23.264215    4114 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 18:02:23.264244    4114 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 18:02:23.264268    4114 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 18:02:23.264271    4114 kubeadm.go:310] 
	I0926 18:02:23.264300    4114 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 18:02:23.264371    4114 kubeadm.go:310] 
	I0926 18:02:23.264406    4114 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 18:02:23.264409    4114 kubeadm.go:310] 
	I0926 18:02:23.264451    4114 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 18:02:23.264502    4114 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 18:02:23.264610    4114 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 18:02:23.264617    4114 kubeadm.go:310] 
	I0926 18:02:23.264665    4114 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 18:02:23.264764    4114 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 18:02:23.264770    4114 kubeadm.go:310] 
	I0926 18:02:23.264825    4114 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5ikksf.pbrpxtw98s1hgyjs \
	I0926 18:02:23.264882    4114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 \
	I0926 18:02:23.264893    4114 kubeadm.go:310] 	--control-plane 
	I0926 18:02:23.264896    4114 kubeadm.go:310] 
	I0926 18:02:23.264945    4114 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 18:02:23.264951    4114 kubeadm.go:310] 
	I0926 18:02:23.264996    4114 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5ikksf.pbrpxtw98s1hgyjs \
	I0926 18:02:23.265052    4114 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 
	I0926 18:02:23.265127    4114 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 18:02:23.265136    4114 cni.go:84] Creating CNI manager for ""
	I0926 18:02:23.265143    4114 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:02:23.270699    4114 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 18:02:23.278734    4114 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 18:02:23.282036    4114 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
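The 496-byte file copied above is the bridge CNI config; its exact contents are not in the log, but a minimal bridge conflist of roughly that shape looks like the following (field values are illustrative assumptions, not the file from this run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF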
	I0926 18:02:23.287509    4114 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:02:23.287574    4114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-937000 minikube.k8s.io/updated_at=2024_09_26T18_02_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=running-upgrade-937000 minikube.k8s.io/primary=true
	I0926 18:02:23.287575    4114 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 18:02:23.317258    4114 kubeadm.go:1113] duration metric: took 29.734417ms to wait for elevateKubeSystemPrivileges
	I0926 18:02:23.329440    4114 ops.go:34] apiserver oom_adj: -16
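The probe above confirms the apiserver runs with an OOM-killer bias of -16, making the kernel much less likely to kill it under memory pressure (oom_adj is the legacy interface; newer kernels expose oom_score_adj alongside it). The check reduces to:

    cat /proc/$(pgrep -n kube-apiserver)/oom_adj   # -n: newest matching process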
	I0926 18:02:23.329549    4114 kubeadm.go:394] duration metric: took 4m12.365161042s to StartCluster
	I0926 18:02:23.329563    4114 settings.go:142] acquiring lock: {Name:mk68436efc4e8fe170d744b4cebdb7ddef61f64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:02:23.329657    4114 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:02:23.330015    4114 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:02:23.330195    4114 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:02:23.330219    4114 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:02:23.330261    4114 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-937000"
	I0926 18:02:23.330269    4114 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-937000"
	W0926 18:02:23.330273    4114 addons.go:243] addon storage-provisioner should already be in state true
	I0926 18:02:23.330286    4114 host.go:66] Checking if "running-upgrade-937000" exists ...
	I0926 18:02:23.330289    4114 config.go:182] Loaded profile config "running-upgrade-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:02:23.330305    4114 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-937000"
	I0926 18:02:23.330337    4114 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-937000"
	I0926 18:02:23.331208    4114 kapi.go:59] client config for running-upgrade-937000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/running-upgrade-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106156570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:02:23.331332    4114 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-937000"
	W0926 18:02:23.331337    4114 addons.go:243] addon default-storageclass should already be in state true
	I0926 18:02:23.331344    4114 host.go:66] Checking if "running-upgrade-937000" exists ...
	I0926 18:02:23.334660    4114 out.go:177] * Verifying Kubernetes components...
	I0926 18:02:23.335024    4114 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 18:02:23.338723    4114 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 18:02:23.338730    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 18:02:23.342665    4114 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:02:23.346749    4114 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:02:23.350659    4114 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:02:23.350666    4114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 18:02:23.350672    4114 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa Username:docker}
	I0926 18:02:23.437211    4114 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:02:23.442704    4114 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:02:23.442748    4114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:02:23.446509    4114 api_server.go:72] duration metric: took 116.306833ms to wait for apiserver process to appear ...
	I0926 18:02:23.446517    4114 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:02:23.446524    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:23.473940    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 18:02:23.511286    4114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
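The two kubectl apply commands above run inside the guest over the SSH session on forwarded port 50252. A rough sketch of that pattern with golang.org/x/crypto/ssh (a hypothetical stand-in for minikube's sshutil/ssh_runner; the key path, port, user, and command are taken from the log):

    // Sketch of the ssh_runner pattern, assuming golang.org/x/crypto/ssh:
    // open an SSH session to the forwarded guest port and run kubectl there.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/running-upgrade-937000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "localhost:50252", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(
    		`sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml`)
    	fmt.Println(string(out), err)
    }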
	I0926 18:02:23.816518    4114 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:02:23.816531    4114 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:02:24.090149    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:24.090184    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:28.447684    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:28.447732    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:29.092337    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:29.092389    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:33.448368    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:33.448415    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:34.094562    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:34.094599    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:38.448563    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:38.448602    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:39.096744    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:39.096807    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:43.449163    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:43.449185    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:44.098823    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:44.099350    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:44.136543    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:44.136704    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:44.157249    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:44.157371    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:44.172776    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:02:44.172874    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:44.185523    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:02:44.185603    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:44.196606    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:02:44.196677    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:44.207299    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:02:44.207367    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:44.220676    4572 logs.go:276] 0 containers: []
	W0926 18:02:44.220700    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:44.220772    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:44.231211    4572 logs.go:276] 0 containers: []
	W0926 18:02:44.231222    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:02:44.231230    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:02:44.231235    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:44.243329    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:44.243338    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:44.283586    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:02:44.283597    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:02:44.299062    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:02:44.299073    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:02:44.316346    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:02:44.316357    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:02:44.330540    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:44.330551    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:44.356227    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:02:44.356236    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:02:44.397894    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:02:44.397904    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:02:44.409057    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:02:44.409068    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:02:44.421137    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:02:44.421149    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:02:44.438738    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:02:44.438749    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:02:44.452630    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:44.452645    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:44.457234    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:44.457243    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:44.536201    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:02:44.536215    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:02:44.554769    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:02:44.554790    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
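Each retry cycle above repeats the same two-step pattern: discover container IDs with a docker ps name filter, then tail each container's last 400 lines. A condensed sketch of that pattern (assuming a local docker CLI; the test drives the same commands through ssh_runner inside the guest):

    // Sketch of the log-gathering loop seen above: find k8s_<component>
    // containers, then tail the last 400 lines of each.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
    		if err != nil {
    			panic(err)
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// Corresponds to the W-level "No container was found matching" lines.
    			fmt.Printf("No container was found matching %q\n", comp)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
    		}
    	}
    }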
	I0926 18:02:47.072587    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:48.449572    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:48.449628    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:52.074787    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:52.075008    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:52.091254    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:52.091343    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:52.104292    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:52.104381    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:52.121325    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:02:52.121409    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:52.136836    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:02:52.136931    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:52.147065    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:02:52.147137    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:52.161853    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:02:52.161926    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:52.172721    4572 logs.go:276] 0 containers: []
	W0926 18:02:52.172739    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:52.172813    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:52.183521    4572 logs.go:276] 0 containers: []
	W0926 18:02:52.183532    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:02:52.183538    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:02:52.183543    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:02:52.195170    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:02:52.195182    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:02:52.209930    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:02:52.209943    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:02:52.221683    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:02:52.221695    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:02:52.239064    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:02:52.239078    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:02:52.252559    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:52.252570    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:52.279518    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:02:52.279528    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:02:52.318588    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:02:52.318601    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:02:52.332560    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:02:52.332569    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:02:52.346726    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:52.346740    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:52.382846    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:02:52.382857    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:02:52.396570    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:52.396590    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:52.433852    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:52.433860    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:52.437823    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:02:52.437829    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:02:52.449282    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:02:52.449294    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:53.450201    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:53.450260    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0926 18:02:53.816204    4114 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0926 18:02:53.820441    4114 out.go:177] * Enabled addons: storage-provisioner
	I0926 18:02:53.828333    4114 addons.go:510] duration metric: took 30.498965375s for enable addons: enabled=[storage-provisioner]
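At this point both processes (4114 and 4572) are in the same wait loop: GET /healthz with a short per-request timeout, log the failed attempt as "stopped", periodically re-dump container logs, retry. A minimal sketch of such a poll; the 5-second request timeout and overall deadline are assumptions inferred from the log cadence, and the real check also authenticates with the client certificates shown earlier:

    // Minimal sketch of the healthz wait loop: poll the apiserver with a
    // short per-request timeout until an overall deadline expires. The 5s
    // timeout and 4-minute deadline are assumptions, not minikube's values.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification only keeps the sketch self-contained;
    		// the real check presents the rest.Config client certs instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // matches the api_server.go:269 lines above
    			time.Sleep(500 * time.Millisecond) // pace retries on fast failures
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }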
	I0926 18:02:54.962470    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:58.451027    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:58.451067    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:59.963954    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:59.964121    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:59.981318    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:59.981418    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:59.993937    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:59.994060    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:00.004576    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:00.004655    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:00.014909    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:00.014980    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:00.026166    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:00.026246    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:00.036682    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:00.036762    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:00.046836    4572 logs.go:276] 0 containers: []
	W0926 18:03:00.046849    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:00.046918    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:00.065099    4572 logs.go:276] 0 containers: []
	W0926 18:03:00.065112    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:00.065120    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:00.065126    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:00.090324    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:00.090335    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:00.113806    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:00.113816    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:00.125743    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:00.125757    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:00.129049    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:00.129058    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:00.133186    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:00.133193    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:00.147930    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:00.147942    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:00.160175    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:00.160186    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:00.167681    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:00.167695    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:00.181626    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:00.181637    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:00.193144    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:00.193157    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:00.198536    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:00.198549    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:00.213070    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:00.213080    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:00.213703    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:00.213714    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:00.226419    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:00.226430    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:02.727600    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:03.325213    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:03.325246    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:07.729810    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:07.729975    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:07.741271    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:07.741373    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:07.752185    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:07.752277    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:07.762660    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:07.762747    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:07.773216    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:07.773308    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:07.783998    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:07.784077    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:07.794816    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:07.794887    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:07.805353    4572 logs.go:276] 0 containers: []
	W0926 18:03:07.805366    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:07.805440    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:07.816445    4572 logs.go:276] 0 containers: []
	W0926 18:03:07.816459    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:07.816467    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:07.816473    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:07.820770    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:07.820777    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:07.855300    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:07.855313    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:07.896842    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:07.896854    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:07.910788    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:07.910798    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:07.935056    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:07.935070    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:07.947416    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:07.947427    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:07.986026    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:07.986033    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:08.004048    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:08.004062    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:08.021210    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:08.021219    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:08.033430    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:08.033441    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:08.326384    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:08.326426    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:08.051418    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:08.051429    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:08.062920    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:08.062932    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:08.078201    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:08.078211    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:08.089819    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:08.089832    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:10.604967    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:13.327964    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:13.327992    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:15.607167    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:15.607497    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:15.633379    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:15.633505    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:15.651534    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:15.651629    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:15.665296    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:15.665381    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:15.676935    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:15.677018    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:15.687559    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:15.687636    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:15.702772    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:15.702849    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:15.712806    4572 logs.go:276] 0 containers: []
	W0926 18:03:15.712819    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:15.712891    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:15.722858    4572 logs.go:276] 0 containers: []
	W0926 18:03:15.722869    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:15.722879    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:15.722884    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:15.761674    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:15.761684    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:15.775512    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:15.775521    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:15.793834    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:15.793844    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:15.808766    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:15.808780    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:15.819856    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:15.819868    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:15.831589    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:15.831603    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:15.846749    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:15.846760    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:15.882711    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:15.882722    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:15.908424    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:15.908432    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:15.912457    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:15.912464    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:15.929637    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:15.929646    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:15.942854    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:15.942864    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:15.980733    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:15.980748    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:15.993099    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:15.993114    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:18.329919    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:18.329958    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:18.507117    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:23.331893    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:23.332067    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:23.342642    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:23.342727    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:23.354865    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:23.354951    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:23.365616    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:23.365688    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:23.375776    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:23.375860    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:23.386137    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:23.386216    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:23.396249    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:23.396332    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:23.405751    4114 logs.go:276] 0 containers: []
	W0926 18:03:23.405763    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:23.405832    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:23.419911    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:23.419926    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:23.419931    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:23.424230    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:23.424240    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:23.464027    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:23.464041    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:23.478617    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:23.478627    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:23.490325    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:23.490336    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:23.508251    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:23.508262    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:23.520706    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:23.520715    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:23.546004    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:23.546021    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:23.558340    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:23.558353    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:23.594909    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:23.594924    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:23.610017    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:23.610026    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:23.622127    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:23.622138    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:23.637572    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:23.637582    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:23.508383    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:23.508473    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:23.520529    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:23.520616    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:23.531746    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:23.531833    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:23.542285    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:23.542366    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:23.554261    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:23.554346    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:23.566258    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:23.566418    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:23.577864    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:23.577944    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:23.588431    4572 logs.go:276] 0 containers: []
	W0926 18:03:23.588442    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:23.588510    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:23.598947    4572 logs.go:276] 0 containers: []
	W0926 18:03:23.598957    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:23.598964    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:23.598969    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:23.639827    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:23.639841    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:23.644439    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:23.644448    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:23.659704    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:23.659715    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:23.678055    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:23.678070    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:23.689236    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:23.689252    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:23.700997    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:23.701013    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:23.713100    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:23.713111    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:23.735573    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:23.735588    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:23.760737    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:23.760747    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:23.795005    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:23.795018    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:23.833737    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:23.833751    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:23.845640    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:23.845651    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:23.863073    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:23.863087    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:23.876201    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:23.876212    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:26.393881    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:26.157137    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:31.396179    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:31.396292    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:31.408478    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:31.408563    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:31.426206    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:31.426289    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:31.438437    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:31.438527    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:31.450462    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:31.450553    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:31.462255    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:31.462337    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:31.474106    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:31.474186    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:31.485546    4572 logs.go:276] 0 containers: []
	W0926 18:03:31.485560    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:31.485640    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:31.510839    4572 logs.go:276] 0 containers: []
	W0926 18:03:31.510853    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:31.510861    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:31.510867    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:31.514931    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:31.514938    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:31.555172    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:31.555187    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:31.569378    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:31.569390    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:31.583517    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:31.583531    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:31.597694    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:31.597709    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:31.614769    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:31.614783    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:31.638298    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:31.638312    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:31.650902    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:31.650917    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:31.662849    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:31.662864    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:31.674491    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:31.674506    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:31.688315    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:31.688330    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:31.725636    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:31.725644    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:31.761749    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:31.761760    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:31.773638    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:31.773649    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:31.159674    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:31.159870    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:31.172387    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:31.172479    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:31.182605    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:31.182679    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:31.193238    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:31.193317    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:31.203868    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:31.203955    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:31.214457    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:31.214540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:31.228008    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:31.228094    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:31.237785    4114 logs.go:276] 0 containers: []
	W0926 18:03:31.237798    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:31.237872    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:31.248063    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:31.248077    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:31.248083    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:31.282303    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:31.282313    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:31.286769    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:31.286778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:31.298188    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:31.298199    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:31.316817    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:31.316831    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:31.334247    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:31.334258    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:31.357664    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:31.357672    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:31.370184    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:31.370195    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:31.410239    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:31.410252    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:31.426627    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:31.426636    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:31.443201    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:31.443216    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:31.455824    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:31.455838    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:31.475446    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:31.475460    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:33.989704    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:34.290849    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:38.992125    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:38.992416    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:39.016966    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:39.017069    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:39.031730    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:39.031824    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:39.044123    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:39.044212    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:39.056324    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:39.056403    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:39.066842    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:39.066928    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:39.076991    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:39.077071    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:39.087619    4114 logs.go:276] 0 containers: []
	W0926 18:03:39.087634    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:39.087701    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:39.098229    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:39.098247    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:39.098253    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:39.113128    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:39.113137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:39.125032    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:39.125043    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:39.159110    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:39.159119    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:39.163254    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:39.163261    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:39.197724    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:39.197736    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:39.212282    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:39.212293    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:39.224329    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:39.224340    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:39.235884    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:39.235895    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:39.253153    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:39.253163    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:39.264895    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:39.264906    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:39.290071    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:39.290080    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:39.302196    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:39.302209    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:39.292932    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:39.293025    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:39.304344    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:39.304434    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:39.316034    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:39.316117    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:39.326409    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:39.326492    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:39.336634    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:39.336717    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:39.347252    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:39.347324    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:39.360130    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:39.360210    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:39.370232    4572 logs.go:276] 0 containers: []
	W0926 18:03:39.370250    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:39.370322    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:39.380401    4572 logs.go:276] 0 containers: []
	W0926 18:03:39.380413    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:39.380420    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:39.380426    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:39.417369    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:39.417381    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:39.434946    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:39.434959    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:39.459585    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:39.459592    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:39.463742    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:39.463749    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:39.502623    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:39.502649    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:39.517111    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:39.517121    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:39.532594    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:39.532607    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:39.546409    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:39.546419    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:39.559923    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:39.559933    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:39.573921    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:39.573932    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:39.590842    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:39.590855    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:39.602591    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:39.602601    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:39.641175    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:39.641185    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:39.655357    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:39.655368    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
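[annotation] Each time the probe fails, the process falls back to gathering diagnostics: for every control-plane component it lists matching container IDs with a docker ps name filter (the logs.go:276 "N containers" lines), then tails the last 400 log lines of each container it found. A sketch of that enumeration loop, run locally for simplicity rather than through ssh_runner as minikube does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // the same component list the report cycles through
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
            for _, id := range ids {
                // docker logs --tail 400 <id>, as in the Run: lines above
                if logs, err := exec.Command("docker", "logs",
                    "--tail", "400", id).CombinedOutput(); err == nil {
                    _ = logs // would be folded into the report here
                }
            }
        }
    }

The empty results for "kindnet" and "storage-provisioner" in the 4572 cycles correspond to the W-level "No container was found matching" warnings.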
	I0926 18:03:42.172089    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:41.818629    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:47.174155    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:47.174317    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:47.185409    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:47.185494    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:47.196373    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:47.196461    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:47.207224    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:47.207305    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:47.217348    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:47.217430    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:47.235285    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:47.235370    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:47.245954    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:47.246037    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:47.260637    4572 logs.go:276] 0 containers: []
	W0926 18:03:47.260648    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:47.260728    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:47.271529    4572 logs.go:276] 0 containers: []
	W0926 18:03:47.271542    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:47.271551    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:47.271556    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:47.308971    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:47.308980    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:47.346361    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:47.346372    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:47.361054    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:47.361067    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:47.375959    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:47.375969    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:47.387433    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:47.387445    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:47.406472    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:47.406483    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:47.429524    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:47.429530    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:47.441159    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:47.441170    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:47.454080    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:47.454090    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:47.489626    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:47.489635    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:47.503801    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:47.503811    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:47.518297    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:47.518308    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:47.522423    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:47.522429    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:47.536592    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:47.536611    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:46.818684    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:46.818805    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:46.829554    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:46.829633    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:46.840407    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:46.840488    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:46.850721    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:46.850796    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:46.860979    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:46.861051    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:46.871206    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:46.871297    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:46.881319    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:46.881387    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:46.890839    4114 logs.go:276] 0 containers: []
	W0926 18:03:46.890849    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:46.890915    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:46.901568    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:46.901584    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:46.901589    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:46.935706    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:46.935715    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:46.940438    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:46.940444    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:46.975271    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:46.975284    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:46.995978    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:46.995990    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:47.007958    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:47.007974    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:47.022475    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:47.022485    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:47.040928    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:47.040938    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:47.059124    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:47.059136    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:47.071104    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:47.071119    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:47.083681    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:47.083691    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:47.100580    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:47.100590    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:47.123805    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:47.123811    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
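[annotation] The "container status" step runs a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: it prefers crictl when present on PATH, and if crictl is missing or its ps fails, the trailing || drops back to docker. A sketch of the same preference order (sudo dropped, run locally rather than over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // prefer crictl if installed, mirroring `which crictl || echo crictl`
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                fmt.Print(string(out))
                return
            }
        }
        // crictl missing or failing: fall back to docker, as the || does
        out, _ := exec.Command("docker", "ps", "-a").CombinedOutput()
        fmt.Print(string(out))
    }
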
	I0926 18:03:50.062244    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:49.637237    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:55.064208    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:55.064319    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:55.075868    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:55.075972    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:55.091401    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:55.091487    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:55.101972    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:55.102052    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:55.113148    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:55.113227    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:55.123200    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:55.123285    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:55.133727    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:55.133810    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:55.143732    4572 logs.go:276] 0 containers: []
	W0926 18:03:55.143742    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:55.143810    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:55.154299    4572 logs.go:276] 0 containers: []
	W0926 18:03:55.154310    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:55.154316    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:55.154322    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:55.158600    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:55.158608    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:55.172454    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:55.172468    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:55.184588    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:55.184600    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:55.223994    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:55.224013    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:55.262523    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:55.262535    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:55.276675    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:55.276688    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:55.290841    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:55.290857    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:55.302607    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:55.302620    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:55.325837    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:55.325845    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:55.363212    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:55.363226    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:55.377710    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:55.377722    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:55.389466    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:55.389476    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:55.404687    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:55.404700    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:55.421867    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:55.421881    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:57.935478    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:54.638618    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:54.638917    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:54.662728    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:03:54.662855    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:54.678673    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:03:54.678772    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:54.691289    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:03:54.691376    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:54.702567    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:03:54.702645    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:54.713911    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:03:54.713998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:54.724433    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:03:54.724522    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:54.734834    4114 logs.go:276] 0 containers: []
	W0926 18:03:54.734849    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:54.734916    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:54.745170    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:03:54.745185    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:03:54.745191    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:03:54.757056    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:54.757068    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:54.781517    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:54.781525    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:54.816697    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:03:54.816713    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:03:54.828283    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:03:54.828300    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:03:54.843062    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:03:54.843079    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:03:54.856650    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:03:54.856661    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:03:54.868435    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:03:54.868445    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:03:54.885725    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:03:54.885738    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:03:54.896754    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:03:54.896767    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:54.908203    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:54.908213    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:54.942806    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:54.942815    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:54.947391    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:03:54.947398    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:03:57.462247    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:02.937504    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:02.937662    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:02.948343    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:02.948430    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:02.958927    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:02.959012    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:02.968970    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:02.969043    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:02.980266    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:02.980348    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:02.991243    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:02.991327    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:03.002937    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:03.003025    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:03.015268    4572 logs.go:276] 0 containers: []
	W0926 18:04:03.015280    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:03.015355    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:03.025857    4572 logs.go:276] 0 containers: []
	W0926 18:04:03.025874    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:03.025880    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:03.025886    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:02.464819    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:02.465362    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:02.498651    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:02.498804    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:02.517864    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:02.517980    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:02.532439    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:02.532534    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:02.544848    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:02.544928    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:02.555401    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:02.555491    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:02.566324    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:02.566403    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:02.576411    4114 logs.go:276] 0 containers: []
	W0926 18:04:02.576446    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:02.576526    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:02.587881    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:02.587896    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:02.587902    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:02.607947    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:02.607960    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:02.626967    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:02.626976    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:02.638916    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:02.638930    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:02.654759    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:02.654770    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:02.674155    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:02.674168    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:02.692117    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:02.692130    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:02.704095    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:02.704106    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:02.737714    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:02.737722    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:02.772748    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:02.772757    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:02.786990    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:02.787000    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:02.812886    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:02.812901    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:02.824698    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:02.824711    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:03.045864    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:03.045877    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:03.063505    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:03.063514    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:03.075999    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:03.076013    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:03.090929    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:03.090943    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:03.128232    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:03.128253    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:03.171673    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:03.171684    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:03.185635    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:03.185645    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:03.197199    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:03.197213    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:03.208657    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:03.208667    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:03.213091    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:03.213099    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:03.247374    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:03.247389    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:03.262229    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:03.262242    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:03.279960    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:03.279971    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:03.302753    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:03.302761    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:05.815978    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:05.331379    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:10.817919    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:10.818044    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:10.830548    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:10.830635    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:10.841055    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:10.841147    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:10.851624    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:10.851708    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:10.862386    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:10.862476    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:10.873158    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:10.873245    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:10.883988    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:10.884066    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:10.893983    4572 logs.go:276] 0 containers: []
	W0926 18:04:10.893995    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:10.894063    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:10.904339    4572 logs.go:276] 0 containers: []
	W0926 18:04:10.904353    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:10.904362    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:10.904368    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:10.923952    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:10.923964    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:10.936090    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:10.936104    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:10.958957    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:10.958965    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:10.972022    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:10.972035    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:11.010000    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:11.010016    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:11.043886    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:11.043896    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:11.062126    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:11.062138    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:11.076159    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:11.076170    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:11.091424    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:11.091435    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:11.134001    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:11.134018    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:11.138320    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:11.138327    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:11.153305    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:11.153315    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:11.164512    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:11.164522    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:11.181144    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:11.181154    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:10.333416    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:10.333600    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:10.349803    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:10.349914    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:10.366864    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:10.366947    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:10.377419    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:10.377493    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:10.387803    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:10.387898    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:10.398385    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:10.398462    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:10.408789    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:10.408863    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:10.418275    4114 logs.go:276] 0 containers: []
	W0926 18:04:10.418287    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:10.418353    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:10.429077    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:10.429092    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:10.429098    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:10.440761    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:10.440770    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:10.466517    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:10.466527    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:10.504361    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:10.504371    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:10.518192    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:10.518202    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:10.531669    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:10.531680    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:10.546638    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:10.546648    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:10.561585    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:10.561594    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:10.573357    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:10.573366    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:10.609375    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:10.609387    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:10.613741    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:10.613748    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:10.625121    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:10.625137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:10.642782    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:10.642794    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:13.156511    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:13.696037    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:18.158493    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:18.158697    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:18.174448    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:18.174540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:18.186275    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:18.186351    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:18.197293    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:18.197374    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:18.207211    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:18.207295    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:18.218090    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:18.218170    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:18.228594    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:18.228673    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:18.238835    4114 logs.go:276] 0 containers: []
	W0926 18:04:18.238848    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:18.238915    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:18.249638    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:18.249653    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:18.249658    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:18.263145    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:18.263154    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:18.282006    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:18.282017    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:18.296980    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:18.296991    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:18.309103    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:18.309114    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:18.333719    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:18.333727    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:18.368556    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:18.368563    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:18.372793    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:18.372802    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:18.386951    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:18.386961    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:18.404520    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:18.404531    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:18.421270    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:18.421281    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:18.456618    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:18.456628    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:18.469231    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:18.469241    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:18.698104    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:18.698228    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:18.713931    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:18.714019    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:18.724124    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:18.724209    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:18.738750    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:18.738831    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:18.749342    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:18.749422    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:18.759812    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:18.759901    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:18.772120    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:18.772202    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:18.783407    4572 logs.go:276] 0 containers: []
	W0926 18:04:18.783418    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:18.783490    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:18.794699    4572 logs.go:276] 0 containers: []
	W0926 18:04:18.794711    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:18.794718    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:18.794723    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:18.798734    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:18.798739    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:18.811416    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:18.811431    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:18.826524    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:18.826534    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:18.843264    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:18.843276    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:18.854928    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:18.854943    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:18.891640    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:18.891650    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:18.905493    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:18.905504    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:18.951982    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:18.951993    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:18.966412    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:18.966423    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:19.002650    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:19.002662    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:19.016481    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:19.016495    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:19.027573    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:19.027583    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:19.044865    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:19.044875    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:19.058706    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:19.058716    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:21.585305    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:20.986584    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:26.587432    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:26.587659    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:26.610487    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:26.610617    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:26.626693    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:26.626792    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:26.640599    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:26.640687    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:26.651677    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:26.651764    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:26.662496    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:26.662578    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:26.673028    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:26.673108    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:26.683312    4572 logs.go:276] 0 containers: []
	W0926 18:04:26.683323    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:26.683390    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:26.693764    4572 logs.go:276] 0 containers: []
	W0926 18:04:26.693776    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:26.693784    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:26.693790    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:26.705313    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:26.705326    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:26.720170    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:26.720184    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:26.734859    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:26.734868    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:26.749760    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:26.749775    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:26.761720    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:26.761731    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:26.784686    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:26.784693    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:26.796861    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:26.796876    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:26.836826    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:26.836845    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:26.872418    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:26.872430    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:26.912424    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:26.912436    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:26.924008    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:26.924021    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:26.941228    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:26.941238    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:26.957296    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:26.957306    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:26.961602    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:26.961608    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:25.988739    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:25.988972    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:26.008062    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:26.008169    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:26.022601    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:26.022685    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:26.045857    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:26.045947    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:26.060528    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:26.060608    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:26.071455    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:26.071534    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:26.085572    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:26.085659    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:26.098909    4114 logs.go:276] 0 containers: []
	W0926 18:04:26.098923    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:26.098995    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:26.110381    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:26.110396    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:26.110401    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:26.124320    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:26.124332    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:26.141675    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:26.141685    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:26.177261    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:26.177268    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:26.181834    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:26.181843    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:26.221086    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:26.221096    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:26.235404    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:26.235415    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:26.247404    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:26.247415    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:26.271766    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:26.271773    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:26.282847    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:26.282858    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:26.297224    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:26.297234    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:26.312536    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:26.312546    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:26.327518    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:26.327527    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:28.841664    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:29.478195    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
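
The two processes above (PIDs 4114 and 4572) are each running the same health-probe loop from api_server.go: issue a GET against https://10.0.2.15:8443/healthz and give up when the client times out. A minimal Go sketch of that probe follows; the 5-second timeout is an assumption read off the consistent ~5 s gap between each "Checking" line and its matching "stopped" line, and skipping TLS verification is an assumption about how a self-signed in-VM apiserver certificate would be handled, not a claim about minikube's actual client configuration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz mimics the healthz check in the transcript: GET the
    // apiserver's /healthz and give up after a client-side timeout.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; inferred from the log spacing
            Transport: &http.Transport{
                // assumed: tolerate the apiserver's self-signed certificate
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // on timeout this surfaces as "Client.Timeout exceeded while
            // awaiting headers", exactly as in the "stopped:" lines above
            return err
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
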
	I0926 18:04:33.843462    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:33.843756    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:33.871340    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:33.871471    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:33.888097    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:33.888195    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:33.901228    4114 logs.go:276] 2 containers: [d2033224d422 400b7e552d08]
	I0926 18:04:33.901315    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:33.918650    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:33.918723    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:33.929054    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:33.929131    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:33.941439    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:33.941516    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:33.955505    4114 logs.go:276] 0 containers: []
	W0926 18:04:33.955516    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:33.955586    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:33.966054    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:33.966069    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:33.966075    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:33.981531    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:33.981546    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:33.999841    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:33.999853    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:34.035483    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:34.035498    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:34.049327    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:34.049340    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:34.065466    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:34.065482    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:34.080278    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:34.080291    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:34.092152    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:34.092166    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:34.103936    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:34.103946    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:34.128727    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:34.128734    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:34.140464    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:34.140474    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:34.175479    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:34.175486    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:34.179665    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:34.179671    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:34.480524    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:34.480784    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:34.503599    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:34.503747    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:34.519858    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:34.519951    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:34.533201    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:34.533288    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:34.543866    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:34.543952    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:34.554630    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:34.554704    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:34.565212    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:34.565296    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:34.576040    4572 logs.go:276] 0 containers: []
	W0926 18:04:34.576052    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:34.576119    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:34.586280    4572 logs.go:276] 0 containers: []
	W0926 18:04:34.586290    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:34.586298    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:34.586303    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:34.599425    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:34.599435    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:34.623791    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:34.623798    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:34.635929    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:34.635939    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:34.652527    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:34.652543    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:34.670869    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:34.670880    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:34.682314    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:34.682324    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:34.704574    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:34.704587    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:34.720418    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:34.720435    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:34.736305    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:34.736316    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:34.749969    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:34.749982    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:34.788759    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:34.788779    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:34.829440    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:34.829454    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:34.846252    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:34.846273    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:34.851363    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:34.851375    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:37.392969    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:36.696876    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
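
After every failed probe, each process enumerates the control-plane containers one component at a time with the exact docker invocation shown in the ssh_runner.go lines: docker ps -a --filter=name=k8s_<component> --format={{.ID}}. The sketch below reproduces that enumeration; running it locally via os/exec (rather than over SSH inside the guest VM, as minikube's ssh_runner does) is an assumption of the sketch, and listComponentContainers is a hypothetical helper name.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listComponentContainers mirrors the docker ps call in the log,
    // executed locally instead of over SSH (a simplification).
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format={{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listComponentContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // matches the "N containers: [...]" lines in the transcript
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
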
	I0926 18:04:42.395027    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:42.395195    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:42.409260    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:42.409358    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:42.421022    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:42.421108    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:42.431575    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:42.431657    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:42.441734    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:42.441818    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:42.458192    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:42.458275    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:42.468839    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:42.468917    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:42.479620    4572 logs.go:276] 0 containers: []
	W0926 18:04:42.479631    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:42.479706    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:42.489463    4572 logs.go:276] 0 containers: []
	W0926 18:04:42.489475    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:42.489484    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:42.489490    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:42.500631    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:42.500643    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:42.512431    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:42.512442    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:42.551527    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:42.551538    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:42.565132    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:42.565142    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:42.581769    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:42.581780    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:42.593602    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:42.593611    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:42.609145    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:42.609155    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:42.621252    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:42.621266    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:42.645440    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:42.645450    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:42.659261    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:42.659273    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:42.672102    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:42.672112    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:42.676671    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:42.676681    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:42.712806    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:42.712821    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:42.730823    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:42.730838    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:41.698854    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:41.699087    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:41.714696    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:41.714793    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:41.726596    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:41.726674    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:41.737634    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:41.737717    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:41.748614    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:41.748698    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:41.759387    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:41.759475    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:41.770231    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:41.770312    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:41.782900    4114 logs.go:276] 0 containers: []
	W0926 18:04:41.782912    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:41.782985    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:41.793672    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:41.793690    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:41.793696    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:41.834777    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:41.834788    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:41.849371    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:41.849381    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:41.863935    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:41.863950    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:41.875989    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:41.876000    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:41.887556    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:41.887566    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:41.899315    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:41.899327    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:41.922662    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:41.922670    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:41.936834    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:41.936846    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:41.949940    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:41.949952    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:41.987071    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:41.987086    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:41.992044    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:41.992053    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:42.006206    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:42.006218    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:42.025479    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:42.025488    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:42.040599    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:42.040615    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:45.271098    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:44.560410    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
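
Once the containers are enumerated, the "Gathering logs for ..." passes tail the last 400 lines of each container's Docker logs and pull the kubelet and docker/cri-docker units from journald, as the quoted /bin/bash -c commands show. The sketch below replays those commands; running them locally without sudo, and gatherLogs as a helper name, are assumptions (the real commands run inside the guest through minikube's ssh_runner), and the container ID in main is just one taken from the transcript as a stand-in.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs replays the collection commands from the transcript:
    // a 400-line docker-logs tail plus the journald units.
    func gatherLogs(containerID string) {
        cmds := [][]string{
            {"docker", "logs", "--tail", "400", containerID},
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
        }
        for _, argv := range cmds {
            out, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
            if err != nil {
                fmt.Printf("%v: %v\n", argv, err)
                continue
            }
            fmt.Printf("--- %v ---\n%s", argv, out)
        }
    }

    func main() {
        // substitute any ID printed by the enumeration step, e.g. one
        // of the coredns/etcd IDs appearing in the log above
        gatherLogs("13d290387e07")
    }
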
	I0926 18:04:50.271594    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:50.271777    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:50.282930    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:50.283015    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:50.293530    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:50.293601    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:50.304138    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:50.304217    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:50.314501    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:50.314577    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:50.326819    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:50.326901    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:50.337705    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:50.337787    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:50.348242    4572 logs.go:276] 0 containers: []
	W0926 18:04:50.348257    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:50.348325    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:50.358694    4572 logs.go:276] 0 containers: []
	W0926 18:04:50.358704    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:50.358712    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:50.358718    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:50.397203    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:50.397211    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:50.401139    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:50.401144    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:50.415219    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:50.415228    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:50.427512    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:50.427522    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:50.441309    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:50.441319    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:50.453064    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:50.453075    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:50.465016    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:50.465031    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:50.476524    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:50.476536    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:50.493831    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:50.493842    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:50.506724    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:50.506734    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:50.529486    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:50.529494    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:50.566162    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:50.566177    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:50.584260    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:50.584274    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:50.622844    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:50.622855    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:49.562926    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:49.563423    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:49.597137    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:49.597303    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:49.615713    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:49.615826    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:49.630741    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:49.630842    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:49.643511    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:49.643593    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:49.654264    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:49.654341    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:49.665019    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:49.665104    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:49.675298    4114 logs.go:276] 0 containers: []
	W0926 18:04:49.675309    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:49.675381    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:49.686100    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:49.686116    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:49.686122    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:49.698047    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:49.698058    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:49.710163    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:49.710173    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:49.733830    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:49.733837    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:49.767542    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:49.767557    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:49.781362    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:49.781372    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:49.799314    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:49.799324    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:49.803847    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:49.803852    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:49.818692    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:49.818701    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:49.830556    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:49.830569    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:49.842470    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:49.842483    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:49.876357    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:49.876367    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:49.895150    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:49.895160    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:49.906595    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:49.906606    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:49.918537    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:49.918548    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:52.431276    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:53.142762    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
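
Putting the pieces together, the overall control flow each PID repeats in this transcript is: probe /healthz, and after every timeout run a full diagnostic pass (enumerate containers, tail logs, dmesg, describe nodes) before retrying. The loop below is a sketch of that pattern only, not minikube's actual retry code: the ~3 s pause is an assumption read off the gap between the end of a gathering pass and the next "Checking" line, and waitForAPIServer/gatherDiagnostics are hypothetical names standing in for the sketches above.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForAPIServer sketches the loop visible in the interleaved log:
    // poll healthz, gather diagnostics on each timeout, then retry.
    func waitForAPIServer(url string, deadline time.Time) error {
        for time.Now().Before(deadline) {
            if err := probeHealthz(url); err == nil {
                return nil // healthz answered
            }
            gatherDiagnostics() // containers, docker logs, journald, dmesg, describe nodes
            time.Sleep(3 * time.Second) // assumed pacing from the timestamps
        }
        return errors.New("apiserver never became healthy before the deadline")
    }

    // Stubs standing in for the earlier sketches.
    func probeHealthz(url string) error { return errors.New("timeout (stub)") }
    func gatherDiagnostics()            { fmt.Println("gathering diagnostics ...") }

    func main() {
        _ = waitForAPIServer("https://10.0.2.15:8443/healthz",
            time.Now().Add(30*time.Second))
    }
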
	I0926 18:04:57.433590    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:57.434121    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:57.477785    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:04:57.477948    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:57.499473    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:04:57.499595    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:57.515285    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:04:57.515378    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:57.527759    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:04:57.527837    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:57.539213    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:04:57.539291    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:57.550142    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:04:57.550232    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:57.561207    4114 logs.go:276] 0 containers: []
	W0926 18:04:57.561218    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:57.561287    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:57.572190    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:04:57.572210    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:57.572216    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:57.607937    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:57.607945    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:57.633663    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:57.633671    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:57.638220    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:04:57.638228    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:04:57.652741    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:04:57.652752    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:04:57.664435    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:04:57.664447    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:04:57.676099    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:04:57.676109    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:04:57.688363    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:04:57.688375    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:04:57.700520    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:04:57.700530    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:04:57.714884    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:04:57.714896    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:04:57.727117    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:04:57.727129    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:04:57.745770    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:04:57.745780    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:57.757609    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:57.757620    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:57.795226    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:04:57.795236    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:04:57.807230    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:04:57.807244    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:04:58.144783    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:58.144941    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:58.156289    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:58.156370    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:58.166481    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:58.166568    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:58.178004    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:58.178091    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:58.189928    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:58.190015    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:58.201108    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:58.201188    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:58.211796    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:58.211868    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:58.221945    4572 logs.go:276] 0 containers: []
	W0926 18:04:58.221957    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:58.222030    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:58.234850    4572 logs.go:276] 0 containers: []
	W0926 18:04:58.234862    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:58.234869    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:58.234875    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:58.239572    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:58.239587    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:58.254316    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:58.254331    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:58.265678    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:58.265687    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:58.278474    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:58.278489    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:58.314297    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:58.314307    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:58.328546    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:58.328556    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:58.343647    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:58.343658    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:58.367942    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:58.367952    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:58.379188    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:58.379201    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:58.391918    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:58.391929    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:58.415611    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:58.415619    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:58.454173    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:58.454189    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:58.493133    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:58.493147    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:58.510295    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:58.510310    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:01.024304    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:00.324323    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:06.026434    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:06.026701    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:06.046850    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:06.046957    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:06.066530    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:06.066607    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:06.077336    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:06.077415    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:06.087972    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:06.088056    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:06.111341    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:06.111422    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:06.127244    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:06.127334    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:06.141008    4572 logs.go:276] 0 containers: []
	W0926 18:05:06.141024    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:06.141084    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:06.152001    4572 logs.go:276] 0 containers: []
	W0926 18:05:06.152013    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:06.152021    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:06.152027    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:06.163492    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:06.163503    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:06.175261    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:06.175276    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:06.186973    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:06.186984    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:06.204419    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:06.204429    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:06.228010    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:06.228018    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:06.239746    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:06.239757    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:06.254798    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:06.254811    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:06.268869    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:06.268882    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:06.306719    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:06.306733    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:06.321748    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:06.321760    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:06.334441    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:06.334453    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:06.373439    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:06.373449    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:06.387319    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:06.387330    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:06.391682    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:06.391688    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:05.326352    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:05.326601    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:05.348005    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:05.348144    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:05.362442    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:05.362532    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:05.374737    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:05.374828    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:05.389773    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:05.389853    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:05.400617    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:05.400701    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:05.411160    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:05.411237    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:05.421073    4114 logs.go:276] 0 containers: []
	W0926 18:05:05.421084    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:05.421148    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:05.431273    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:05.431291    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:05.431297    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:05.445594    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:05.445605    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:05.456917    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:05.456927    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:05.473092    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:05.473101    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:05.484490    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:05.484498    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:05.519570    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:05.519581    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:05.531780    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:05.531789    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:05.555238    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:05.555247    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:05.578999    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:05.579008    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:05.583416    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:05.583423    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:05.594978    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:05.594989    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:05.606367    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:05.606378    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:05.623120    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:05.623130    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:05.636948    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:05.636959    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:05.672530    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:05.672541    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:08.191365    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:08.928217    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:13.192400    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:13.192542    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:13.208957    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:13.209048    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:13.219933    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:13.220005    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:13.231392    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:13.231481    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:13.242277    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:13.242354    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:13.252635    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:13.252710    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:13.262922    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:13.262996    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:13.274758    4114 logs.go:276] 0 containers: []
	W0926 18:05:13.274772    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:13.274843    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:13.284820    4114 logs.go:276] 1 containers: [37c276517b32]
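The eight "docker ps" runs above form the enumeration pass that precedes each collection cycle: one query per control-plane component, filtering container names by the k8s_ prefix; an empty result yields the 'No container was found matching "kindnet"' warning. A sketch of that enumeration (the component list is taken from the queries above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("listing %s containers failed: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }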
	I0926 18:05:13.284839    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:13.284844    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:13.298702    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:13.298712    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:13.309813    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:13.309823    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:13.322752    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:13.322763    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:13.337511    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:13.337524    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:13.372207    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:13.372217    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:13.407003    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:13.407013    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:13.418787    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:13.418800    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:13.423235    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:13.423240    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:13.446750    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:13.446757    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:13.463730    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:13.463741    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:13.481389    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:13.481399    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:13.492570    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:13.492581    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:13.506227    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:13.506237    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:13.518531    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:13.518543    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:13.930530    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:13.930837    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:13.959011    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:13.959140    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:13.977231    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:13.977341    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:13.990727    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:13.990820    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:14.004158    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:14.004244    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:14.014392    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:14.014473    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:14.025148    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:14.025232    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:14.035279    4572 logs.go:276] 0 containers: []
	W0926 18:05:14.035290    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:14.035365    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:14.046133    4572 logs.go:276] 0 containers: []
	W0926 18:05:14.046145    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:14.046153    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:14.046159    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:14.060447    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:14.060457    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:14.071937    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:14.071953    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:14.076275    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:14.076285    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:14.114586    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:14.114599    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:14.138002    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:14.138016    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:14.153619    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:14.153632    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:14.165939    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:14.165954    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:14.178185    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:14.178196    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:14.196431    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:14.196447    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:14.232652    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:14.232668    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:14.247227    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:14.247240    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:14.258943    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:14.258955    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:14.272384    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:14.272396    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:14.311453    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:14.311461    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:16.826879    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:16.032559    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:21.828643    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:21.828801    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:21.840745    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:21.840836    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:21.851404    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:21.851497    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:21.862660    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:21.862748    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:21.873801    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:21.873884    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:21.884692    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:21.884774    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:21.901660    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:21.901736    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:21.912511    4572 logs.go:276] 0 containers: []
	W0926 18:05:21.912526    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:21.912600    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:21.922482    4572 logs.go:276] 0 containers: []
	W0926 18:05:21.922499    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:21.922508    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:21.922513    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:21.938279    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:21.938294    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:21.951081    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:21.951095    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:21.988135    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:21.988141    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:22.000812    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:22.000823    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:22.029368    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:22.029378    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:22.051697    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:22.051707    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:22.063267    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:22.063280    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:22.076918    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:22.076933    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:22.091358    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:22.091370    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:22.102493    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:22.102504    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:22.117650    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:22.117662    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:22.122202    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:22.122210    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:22.157591    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:22.157604    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:22.196818    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:22.196830    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:21.033545    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:21.033887    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:21.056311    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:21.056436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:21.073058    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:21.073155    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:21.088712    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:21.088795    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:21.101538    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:21.101614    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:21.112060    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:21.112128    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:21.122630    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:21.122722    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:21.133795    4114 logs.go:276] 0 containers: []
	W0926 18:05:21.133807    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:21.133873    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:21.144150    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:21.144168    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:21.144173    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:21.155560    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:21.155571    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:21.168602    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:21.168611    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:21.181392    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:21.181403    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:21.197142    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:21.197158    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:21.231802    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:21.231810    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:21.245496    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:21.245509    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:21.262427    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:21.262439    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:21.279351    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:21.279367    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:21.314348    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:21.314358    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:21.326295    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:21.326311    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:21.349912    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:21.349919    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:21.354032    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:21.354042    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:21.368161    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:21.368173    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:21.380788    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:21.380801    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:23.900892    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:24.712247    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:28.903423    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:28.903697    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:28.927423    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:28.927562    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:28.943098    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:28.943192    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:28.955714    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:28.955807    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:28.966602    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:28.966684    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:28.981527    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:28.981619    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:28.991744    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:28.991822    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:29.001917    4114 logs.go:276] 0 containers: []
	W0926 18:05:29.001932    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:29.002004    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:29.016438    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:29.016455    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:29.016461    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:29.027856    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:29.027867    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:29.064452    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:29.064461    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:29.076512    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:29.076527    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:29.088147    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:29.088161    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:29.103160    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:29.103169    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:29.115020    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:29.115031    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:29.129905    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:29.129915    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:29.140953    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:29.140964    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:29.153385    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:29.153397    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:29.157886    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:29.157891    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:29.193411    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:29.193423    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:29.208765    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:29.208778    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:29.223714    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:29.223726    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:29.242073    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:29.242083    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:29.714402    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:29.714659    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:29.736466    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:29.736565    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:29.751592    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:29.751691    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:29.764035    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:29.764120    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:29.774454    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:29.774542    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:29.785470    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:29.785558    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:29.803604    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:29.803686    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:29.814209    4572 logs.go:276] 0 containers: []
	W0926 18:05:29.814228    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:29.814303    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:29.824291    4572 logs.go:276] 0 containers: []
	W0926 18:05:29.824301    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:29.824311    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:29.824316    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:29.838412    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:29.838428    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:29.853287    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:29.853298    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:29.877106    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:29.877115    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:29.890283    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:29.890297    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:29.894558    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:29.894566    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:29.932865    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:29.932875    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:29.944811    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:29.944820    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:29.984721    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:29.984737    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:30.022443    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:30.022455    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:30.037781    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:30.037797    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:30.049946    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:30.049955    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:30.063453    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:30.063464    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:30.074291    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:30.074306    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:30.089309    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:30.089321    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:32.611915    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:31.767727    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:37.614028    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:37.614193    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:37.628793    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:37.628892    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:37.641282    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:37.641358    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:37.652847    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:37.652927    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:37.663528    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:37.663614    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:37.674489    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:37.674562    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:37.684861    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:37.684943    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:37.702825    4572 logs.go:276] 0 containers: []
	W0926 18:05:37.702837    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:37.702908    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:37.712800    4572 logs.go:276] 0 containers: []
	W0926 18:05:37.712814    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:37.712822    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:37.712828    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:37.751791    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:37.751825    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:37.756261    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:37.756270    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:37.790827    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:37.790843    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:37.804535    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:37.804545    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:37.815996    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:37.816007    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:37.838076    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:37.838084    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:37.853198    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:37.853212    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:37.870916    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:37.870930    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:37.884244    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:37.884261    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:37.921227    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:37.921241    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:37.935714    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:37.935728    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:37.947794    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:37.947808    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:37.959444    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:37.959461    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:37.973471    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:37.973486    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:36.769762    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:36.769986    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:36.787359    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:36.787466    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:36.801599    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:36.801690    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:36.813900    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:36.813994    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:36.824164    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:36.824247    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:36.835146    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:36.835232    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:36.845604    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:36.845680    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:36.864414    4114 logs.go:276] 0 containers: []
	W0926 18:05:36.864427    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:36.864499    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:36.874826    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:36.874845    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:36.874851    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:36.909972    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:36.909981    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:36.923946    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:36.923956    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:36.935540    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:36.935550    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:36.946908    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:36.946920    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:36.964028    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:36.964037    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:36.968550    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:36.968559    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:36.985704    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:36.985717    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:36.997671    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:36.997680    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:37.009303    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:37.009316    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:37.020928    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:37.020938    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:37.036937    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:37.036951    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:37.062146    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:37.062154    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:37.096781    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:37.096792    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:37.109123    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:37.109137    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:40.489219    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:39.625435    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:45.491583    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:45.491670    4572 kubeadm.go:597] duration metric: took 4m3.136801625s to restartPrimaryControlPlane
	W0926 18:05:45.491733    4572 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0926 18:05:45.491760    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0926 18:05:46.448027    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:05:46.452908    4572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:05:46.455756    4572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:05:46.458990    4572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:05:46.458998    4572 kubeadm.go:157] found existing configuration files:
	
	I0926 18:05:46.459038    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0926 18:05:46.461415    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:05:46.461445    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:05:46.464214    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0926 18:05:46.467070    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:05:46.467098    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:05:46.469662    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0926 18:05:46.472244    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:05:46.472276    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:05:46.475297    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0926 18:05:46.477675    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:05:46.477701    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
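The grep/rm sequence above is the stale-config check before reinitializing: each /etc/kubernetes/*.conf is searched for the expected control-plane endpoint and deleted when the endpoint is absent (here the files do not exist at all, so grep exits with status 2 and the rm is a no-op). A local sketch of the same logic (paths and endpoint copied from the log; the loop itself is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50538"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, path := range confs {
    		data, err := os.ReadFile(path)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // endpoint present, keep the file
    		}
    		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
    		fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    		os.Remove(path)
    	}
    }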
	I0926 18:05:46.480312    4572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 18:05:46.497648    4572 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0926 18:05:46.497694    4572 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 18:05:46.555488    4572 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 18:05:46.555620    4572 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 18:05:46.555664    4572 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0926 18:05:46.605254    4572 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:05:46.609472    4572 out.go:235]   - Generating certificates and keys ...
	I0926 18:05:46.609507    4572 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 18:05:46.609562    4572 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 18:05:46.609607    4572 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:05:46.609737    4572 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0926 18:05:46.609815    4572 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:05:46.609845    4572 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0926 18:05:46.609885    4572 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0926 18:05:46.609915    4572 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:05:46.609949    4572 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:05:46.609985    4572 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:05:46.610005    4572 kubeadm.go:310] [certs] Using the existing "sa" key
	I0926 18:05:46.610030    4572 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:05:46.687430    4572 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:05:46.774785    4572 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:05:46.893289    4572 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:05:47.040080    4572 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:05:47.069356    4572 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:05:47.069884    4572 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:05:47.069932    4572 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 18:05:47.170283    4572 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:05:47.174502    4572 out.go:235]   - Booting up control plane ...
	I0926 18:05:47.174548    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:05:47.174591    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:05:47.174633    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:05:47.174672    4572 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:05:47.174757    4572 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
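kubeadm has now written the four static Pod manifests and waits (up to 4m0s) for the kubelet to run them; further down, the log reports all control-plane components healthy after about four seconds. A quick check of the manifest directory kubeadm populates (manifest names and path from the lines above; the check is illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/etc/kubernetes/manifests"
    	for _, m := range []string{
    		"kube-apiserver.yaml", "kube-controller-manager.yaml",
    		"kube-scheduler.yaml", "etcd.yaml",
    	} {
    		if _, err := os.Stat(filepath.Join(dir, m)); err != nil {
    			fmt.Printf("missing manifest %s: %v\n", m, err)
    			continue
    		}
    		fmt.Printf("found %s\n", filepath.Join(dir, m))
    	}
    }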
	I0926 18:05:44.627919    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:44.628274    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:44.657028    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:44.657190    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:44.675333    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:44.675436    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:44.691501    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:44.691594    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:44.703094    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:44.703175    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:44.715065    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:44.715138    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:44.725567    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:44.725636    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:44.736025    4114 logs.go:276] 0 containers: []
	W0926 18:05:44.736038    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:44.736105    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:44.746751    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:44.746768    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:44.746774    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:44.758562    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:44.758572    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:44.770299    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:44.770309    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:05:44.781868    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:44.781878    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:44.793435    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:44.793449    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:44.818933    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:44.818951    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:44.854543    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:44.854554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:44.869833    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:44.869849    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:44.881985    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:44.881997    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:44.918479    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:44.918493    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:44.933437    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:44.933452    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:44.948632    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:44.948646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:44.968209    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:44.968226    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:44.972615    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:44.972623    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:44.991635    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:44.991651    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:47.504334    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:51.171572    4572 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001255 seconds
	I0926 18:05:51.171632    4572 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 18:05:51.175268    4572 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 18:05:51.684622    4572 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 18:05:51.684775    4572 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-211000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 18:05:52.188099    4572 kubeadm.go:310] [bootstrap-token] Using token: kpqn1y.znfhhlvfvuxxug59
	I0926 18:05:52.192102    4572 out.go:235]   - Configuring RBAC rules ...
	I0926 18:05:52.192154    4572 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 18:05:52.192205    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 18:05:52.194132    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 18:05:52.199969    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0926 18:05:52.200916    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 18:05:52.201650    4572 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 18:05:52.206330    4572 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 18:05:52.388590    4572 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 18:05:52.592759    4572 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 18:05:52.593333    4572 kubeadm.go:310] 
	I0926 18:05:52.593412    4572 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 18:05:52.593416    4572 kubeadm.go:310] 
	I0926 18:05:52.593556    4572 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 18:05:52.593574    4572 kubeadm.go:310] 
	I0926 18:05:52.593607    4572 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 18:05:52.593677    4572 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 18:05:52.593710    4572 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 18:05:52.593716    4572 kubeadm.go:310] 
	I0926 18:05:52.593781    4572 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 18:05:52.593786    4572 kubeadm.go:310] 
	I0926 18:05:52.593817    4572 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 18:05:52.593820    4572 kubeadm.go:310] 
	I0926 18:05:52.593919    4572 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 18:05:52.594003    4572 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 18:05:52.594058    4572 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 18:05:52.594061    4572 kubeadm.go:310] 
	I0926 18:05:52.594213    4572 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 18:05:52.594252    4572 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 18:05:52.594256    4572 kubeadm.go:310] 
	I0926 18:05:52.594312    4572 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kpqn1y.znfhhlvfvuxxug59 \
	I0926 18:05:52.594386    4572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 \
	I0926 18:05:52.594401    4572 kubeadm.go:310] 	--control-plane 
	I0926 18:05:52.594403    4572 kubeadm.go:310] 
	I0926 18:05:52.594454    4572 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 18:05:52.594461    4572 kubeadm.go:310] 
	I0926 18:05:52.594506    4572 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kpqn1y.znfhhlvfvuxxug59 \
	I0926 18:05:52.594570    4572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 
	I0926 18:05:52.594734    4572 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
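The WARNING above is advisory; the fix it names can be applied inside the guest (a minimal sketch, assuming systemd manages kubelet as it does on this Buildroot image):

	# enable kubelet for subsequent boots, then confirm the unit state
	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet.service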
	I0926 18:05:52.594764    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:05:52.594803    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:05:52.598645    4572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 18:05:52.605787    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 18:05:52.608935    4572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
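The exact 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in this log; a bridge conflist of the kind minikube generates looks roughly like the following (illustrative sketch only; field values are assumptions, not the file minikube actually wrote):

	# write an assumed minimal bridge CNI config; values are illustrative
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF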
	I0926 18:05:52.614777    4572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:05:52.614844    4572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 18:05:52.614903    4572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-211000 minikube.k8s.io/updated_at=2024_09_26T18_05_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=stopped-upgrade-211000 minikube.k8s.io/primary=true
	I0926 18:05:52.660032    4572 ops.go:34] apiserver oom_adj: -16
	I0926 18:05:52.660071    4572 kubeadm.go:1113] duration metric: took 45.290792ms to wait for elevateKubeSystemPrivileges
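The clusterrolebinding created above can be checked with the same pinned kubectl binary (a read-only sketch):

	# confirm the minikube-rbac binding granted cluster-admin to kube-system:default
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac -o wide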
	I0926 18:05:52.660141    4572 kubeadm.go:394] duration metric: took 4m10.319511542s to StartCluster
	I0926 18:05:52.660153    4572 settings.go:142] acquiring lock: {Name:mk68436efc4e8fe170d744b4cebdb7ddef61f64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:05:52.660241    4572 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:05:52.660642    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:05:52.660829    4572 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:05:52.660849    4572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:05:52.660937    4572 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-211000"
	I0926 18:05:52.660947    4572 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-211000"
	W0926 18:05:52.660952    4572 addons.go:243] addon storage-provisioner should already be in state true
	I0926 18:05:52.660963    4572 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0926 18:05:52.661055    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:05:52.661047    4572 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-211000"
	I0926 18:05:52.661102    4572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-211000"
	I0926 18:05:52.661294    4572 retry.go:31] will retry after 1.260194994s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/monitor: connect: connection refused
	I0926 18:05:52.664742    4572 out.go:177] * Verifying Kubernetes components...
	I0926 18:05:52.671620    4572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:05:52.677750    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:05:52.683812    4572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:05:52.683820    4572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 18:05:52.683828    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:05:52.762538    4572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:05:52.768045    4572 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:05:52.768112    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:05:52.774036    4572 api_server.go:72] duration metric: took 113.199542ms to wait for apiserver process to appear ...
	I0926 18:05:52.774044    4572 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:05:52.774053    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:52.778429    4572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:05:52.506273    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:52.506390    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:52.520318    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:05:52.520400    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:52.531159    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:05:52.531254    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:52.543594    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:05:52.543682    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:52.555016    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:05:52.555107    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:52.565612    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:05:52.565696    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:52.577147    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:05:52.577223    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:52.588371    4114 logs.go:276] 0 containers: []
	W0926 18:05:52.588383    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:52.588463    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:52.600312    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:05:52.600329    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:52.600336    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:52.604875    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:05:52.604884    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:05:52.622991    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:05:52.623004    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:05:52.640065    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:52.640075    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:52.665353    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:52.665363    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:52.700676    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:52.700690    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:52.737929    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:05:52.737941    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:05:52.755831    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:05:52.755843    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:05:52.775177    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:05:52.775186    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:05:52.788275    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:05:52.788286    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:05:52.803113    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:05:52.803125    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:05:52.817782    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:05:52.817795    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:05:52.831022    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:05:52.831034    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:52.844208    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:05:52.844224    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:05:52.859777    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:05:52.859788    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
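The gathering loop above reduces to two docker invocations per component; an equivalent manual sketch (run inside the guest):

	# list each k8s_<component> container, then tail its last 400 log lines
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	  docker ps -a --filter=name=k8s_${c} --format={{.ID}} | while read -r id; do
	    docker logs --tail 400 "$id"
	  done
	done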
	I0926 18:05:53.924448    4572 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ce570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:05:53.924592    4572 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-211000"
	W0926 18:05:53.924598    4572 addons.go:243] addon default-storageclass should already be in state true
	I0926 18:05:53.924611    4572 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0926 18:05:53.925216    4572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 18:05:53.925222    4572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 18:05:53.925229    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:05:53.962129    4572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 18:05:54.031334    4572 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:05:54.031345    4572 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:05:57.775994    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:57.776114    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:55.384542    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:02.776759    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:02.776791    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:00.386550    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:00.386656    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:00.397882    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:00.397975    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:00.408419    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:00.408500    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:00.419173    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:00.419260    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:00.429594    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:00.429672    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:00.440670    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:00.440754    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:00.451317    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:00.451391    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:00.464335    4114 logs.go:276] 0 containers: []
	W0926 18:06:00.464346    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:00.464412    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:00.474971    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:00.474989    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:00.474995    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:00.513185    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:00.513205    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:00.528938    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:00.528949    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:00.542968    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:00.542978    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:00.554164    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:00.554174    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:00.566085    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:00.566095    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:00.577813    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:00.577829    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:00.582211    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:00.582219    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:00.617271    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:00.617286    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:00.628773    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:00.628785    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:00.640424    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:00.640434    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:00.659547    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:00.659556    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:00.683752    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:00.683761    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:00.699120    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:00.699132    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:06:00.716176    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:00.716187    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:03.229507    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:07.777071    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:07.777094    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:08.231501    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:08.231636    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:08.244820    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:08.244911    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:08.260574    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:08.260652    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:08.271310    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:08.271379    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:08.281877    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:08.281965    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:08.293681    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:08.293765    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:08.305050    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:08.305135    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:08.315466    4114 logs.go:276] 0 containers: []
	W0926 18:06:08.315479    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:08.315553    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:08.325618    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:08.325637    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:08.325642    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:08.361496    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:08.361505    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:08.365914    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:08.365920    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:08.377689    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:08.377703    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:08.402682    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:08.402695    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:08.415634    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:08.415646    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:08.433615    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:08.433627    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:08.445406    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:08.445417    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:08.459544    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:08.459554    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:08.471332    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:08.471344    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:08.506531    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:08.506543    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:08.526015    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:08.526027    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:08.539012    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:08.539023    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:08.553888    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:08.553899    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:08.566014    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:08.566024    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:06:12.777480    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:12.777519    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:11.085546    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:17.778131    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:17.778171    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:16.087660    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:16.087874    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:16.105326    4114 logs.go:276] 1 containers: [4e2743bd553f]
	I0926 18:06:16.105424    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:16.118341    4114 logs.go:276] 1 containers: [a76c6c0d7b4e]
	I0926 18:06:16.118424    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:16.129927    4114 logs.go:276] 4 containers: [5556a2b7412a 7f32edc07e38 d2033224d422 400b7e552d08]
	I0926 18:06:16.129998    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:16.144769    4114 logs.go:276] 1 containers: [257ae74b8541]
	I0926 18:06:16.144845    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:16.155452    4114 logs.go:276] 1 containers: [3bdef5c3a97f]
	I0926 18:06:16.155530    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:16.166380    4114 logs.go:276] 1 containers: [e87471d89654]
	I0926 18:06:16.166458    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:16.176466    4114 logs.go:276] 0 containers: []
	W0926 18:06:16.176476    4114 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:16.176540    4114 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:16.186759    4114 logs.go:276] 1 containers: [37c276517b32]
	I0926 18:06:16.186774    4114 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:16.186780    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:16.191248    4114 logs.go:123] Gathering logs for kube-proxy [3bdef5c3a97f] ...
	I0926 18:06:16.191257    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bdef5c3a97f"
	I0926 18:06:16.203079    4114 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:16.203090    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:16.226074    4114 logs.go:123] Gathering logs for kube-apiserver [4e2743bd553f] ...
	I0926 18:06:16.226081    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2743bd553f"
	I0926 18:06:16.244063    4114 logs.go:123] Gathering logs for coredns [5556a2b7412a] ...
	I0926 18:06:16.244072    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5556a2b7412a"
	I0926 18:06:16.255783    4114 logs.go:123] Gathering logs for coredns [400b7e552d08] ...
	I0926 18:06:16.255794    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 400b7e552d08"
	I0926 18:06:16.267621    4114 logs.go:123] Gathering logs for kube-scheduler [257ae74b8541] ...
	I0926 18:06:16.267632    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 257ae74b8541"
	I0926 18:06:16.282628    4114 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:16.282638    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:16.317225    4114 logs.go:123] Gathering logs for coredns [7f32edc07e38] ...
	I0926 18:06:16.317240    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f32edc07e38"
	I0926 18:06:16.329093    4114 logs.go:123] Gathering logs for container status ...
	I0926 18:06:16.329103    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:06:16.341627    4114 logs.go:123] Gathering logs for storage-provisioner [37c276517b32] ...
	I0926 18:06:16.341636    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37c276517b32"
	I0926 18:06:16.352821    4114 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:16.352832    4114 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:06:16.387521    4114 logs.go:123] Gathering logs for etcd [a76c6c0d7b4e] ...
	I0926 18:06:16.387528    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a76c6c0d7b4e"
	I0926 18:06:16.401682    4114 logs.go:123] Gathering logs for coredns [d2033224d422] ...
	I0926 18:06:16.401698    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2033224d422"
	I0926 18:06:16.419452    4114 logs.go:123] Gathering logs for kube-controller-manager [e87471d89654] ...
	I0926 18:06:16.419463    4114 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e87471d89654"
	I0926 18:06:18.938669    4114 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:22.779007    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:22.779054    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:23.940048    4114 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:23.944711    4114 out.go:201] 
	W0926 18:06:23.947524    4114 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0926 18:06:23.947535    4114 out.go:270] * 
	W0926 18:06:23.948126    4114 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:06:23.959515    4114 out.go:201] 
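Per the advice box above, the full log bundle for an issue report is produced with:

	# writes logs.txt for attaching to a GitHub issue
	minikube logs --file=logs.txt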
	W0926 18:06:24.031687    4572 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0926 18:06:24.035909    4572 out.go:177] * Enabled addons: storage-provisioner
	I0926 18:06:24.043899    4572 addons.go:510] duration metric: took 31.384724125s for enable addons: enabled=[storage-provisioner]
	I0926 18:06:27.779133    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:27.779184    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:32.780459    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:32.780520    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:37.782198    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:37.782238    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
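The healthz endpoint that never reported healthy can also be probed by hand from inside the guest (a sketch, assuming curl is present in the guest image; -k skips verification of the cluster's self-signed serving certificate):

	curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo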
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-09-27 00:57:32 UTC, ends at Fri 2024-09-27 01:06:40 UTC. --
	Sep 27 01:06:24 running-upgrade-937000 dockerd[3140]: time="2024-09-27T01:06:24.955852813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 27 01:06:24 running-upgrade-937000 dockerd[3140]: time="2024-09-27T01:06:24.955892185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 27 01:06:24 running-upgrade-937000 dockerd[3140]: time="2024-09-27T01:06:24.955898018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 27 01:06:24 running-upgrade-937000 dockerd[3140]: time="2024-09-27T01:06:24.957288103Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e51e5b24e5b419f8acb6ad40978455de64d34a14503d17d1ecd506f04a8cbeff pid=18798 runtime=io.containerd.runc.v2
	Sep 27 01:06:25 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:25Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 01:06:25 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:25Z" level=error msg="ContainerStats resp: {0x40006e3300 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x40007bb4c0 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x40008d99c0 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x40008d9b00 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x40009d8000 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x4000a0c380 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x4000a0c500 linux}"
	Sep 27 01:06:26 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:26Z" level=error msg="ContainerStats resp: {0x40009d8040 linux}"
	Sep 27 01:06:30 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 01:06:35 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 27 01:06:36 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:36Z" level=error msg="ContainerStats resp: {0x40008d8f80 linux}"
	Sep 27 01:06:36 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:36Z" level=error msg="ContainerStats resp: {0x40008d9e00 linux}"
	Sep 27 01:06:37 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:37Z" level=error msg="ContainerStats resp: {0x40008d9200 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x400083ab80 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x40006e2c00 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x400083ad40 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x40006e3a40 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x40006e3c00 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x4000356080 linux}"
	Sep 27 01:06:38 running-upgrade-937000 cri-dockerd[2981]: time="2024-09-27T01:06:38Z" level=error msg="ContainerStats resp: {0x4000639000 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e51e5b24e5b41       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5f99bc0db3ae3
	c178cb110cf24       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   da4120c4a5475
	5556a2b7412a1       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5f99bc0db3ae3
	7f32edc07e38d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   da4120c4a5475
	37c276517b327       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   26f386990cb48
	3bdef5c3a97f9       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ad4b028ecd9b9
	a76c6c0d7b4e0       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   dd6676b85702c
	e87471d896547       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   e7a514173d0e1
	257ae74b8541f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   7a9140e67af90
	4e2743bd553f5       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   99a60ebe8b626
	
	
	==> coredns [5556a2b7412a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:45427->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:40293->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:59878->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:58541->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:55286->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:58090->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:55093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:45490->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:52234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5193760789030259512.932542470193350011. HINFO: read udp 10.244.0.3:38632->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7f32edc07e38] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:55215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:47813->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:53441->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:48188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:40765->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:38640->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:58853->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:59523->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:57401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2652780565596401142.40426602910971943. HINFO: read udp 10.244.0.2:57753->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
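Both exited CoreDNS replicas time out reaching 10.0.2.3:53, the built-in resolver of QEMU user-mode networking; upstream reachability can be checked from inside the guest (a sketch, assuming the BusyBox nslookup applet is available):

	# query the upstream resolver directly; a timeout here matches the errors above
	nslookup kubernetes.io 10.0.2.3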
	
	
	==> coredns [c178cb110cf2] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2404944701034747929.5458462714552834254. HINFO: read udp 10.244.0.2:44117->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2404944701034747929.5458462714552834254. HINFO: read udp 10.244.0.2:52918->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2404944701034747929.5458462714552834254. HINFO: read udp 10.244.0.2:45845->10.0.2.3:53: i/o timeout
	
	
	==> coredns [e51e5b24e5b4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8529947464021059687.4814212141328574208. HINFO: read udp 10.244.0.3:53484->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8529947464021059687.4814212141328574208. HINFO: read udp 10.244.0.3:33996->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8529947464021059687.4814212141328574208. HINFO: read udp 10.244.0.3:44837->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-937000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-937000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=running-upgrade-937000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_26T18_02_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:02:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-937000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:06:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:02:23 +0000   Fri, 27 Sep 2024 01:02:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:02:23 +0000   Fri, 27 Sep 2024 01:02:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:02:23 +0000   Fri, 27 Sep 2024 01:02:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:02:23 +0000   Fri, 27 Sep 2024 01:02:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-937000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 24ac149f5c1948fba3a40c87199e45c2
	  System UUID:                24ac149f5c1948fba3a40c87199e45c2
	  Boot ID:                    91f86cdd-1c22-49a7-a838-505c0ff093f8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kqvlb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-rjpns                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-937000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-937000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-937000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-4thjf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-937000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-937000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-937000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-937000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-937000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-937000 event: Registered Node running-upgrade-937000 in Controller
	
	
	==> dmesg <==
	[  +1.622399] systemd-fstab-generator[870]: Ignoring "noauto" for root device
	[  +0.083129] systemd-fstab-generator[881]: Ignoring "noauto" for root device
	[  +0.078050] systemd-fstab-generator[892]: Ignoring "noauto" for root device
	[  +1.136590] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.090431] systemd-fstab-generator[1042]: Ignoring "noauto" for root device
	[  +0.072129] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +2.137509] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[  +9.143950] systemd-fstab-generator[1918]: Ignoring "noauto" for root device
	[  +2.904751] systemd-fstab-generator[2198]: Ignoring "noauto" for root device
	[  +0.152974] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[  +0.091523] systemd-fstab-generator[2244]: Ignoring "noauto" for root device
	[Sep27 00:58] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +3.243285] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.187030] systemd-fstab-generator[2938]: Ignoring "noauto" for root device
	[  +0.083560] systemd-fstab-generator[2949]: Ignoring "noauto" for root device
	[  +0.079447] systemd-fstab-generator[2960]: Ignoring "noauto" for root device
	[  +0.090646] systemd-fstab-generator[2974]: Ignoring "noauto" for root device
	[  +2.305593] systemd-fstab-generator[3127]: Ignoring "noauto" for root device
	[  +3.641174] systemd-fstab-generator[3548]: Ignoring "noauto" for root device
	[  +1.422515] systemd-fstab-generator[3843]: Ignoring "noauto" for root device
	[ +19.929678] kauditd_printk_skb: 68 callbacks suppressed
	[Sep27 00:59] kauditd_printk_skb: 21 callbacks suppressed
	[Sep27 01:02] systemd-fstab-generator[11851]: Ignoring "noauto" for root device
	[  +5.634702] systemd-fstab-generator[12443]: Ignoring "noauto" for root device
	[  +0.464378] systemd-fstab-generator[12580]: Ignoring "noauto" for root device
	
	
	==> etcd [a76c6c0d7b4e] <==
	{"level":"info","ts":"2024-09-27T01:02:18.814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-27T01:02:18.814Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-27T01:02:18.834Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T01:02:18.841Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T01:02:18.841Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T01:02:18.841Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-27T01:02:18.841Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-27T01:02:19.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-937000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T01:02:19.589Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:02:19.589Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:02:19.589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:02:19.589Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T01:02:19.588Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 01:06:40 up 9 min,  0 users,  load average: 0.19, 0.26, 0.14
	Linux running-upgrade-937000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [4e2743bd553f] <==
	I0927 01:02:20.756018       1 controller.go:611] quota admission added evaluator for: namespaces
	I0927 01:02:20.799329       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 01:02:20.801544       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0927 01:02:20.801582       1 cache.go:39] Caches are synced for autoregister controller
	I0927 01:02:20.801686       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0927 01:02:20.806740       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 01:02:20.814144       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0927 01:02:20.814201       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0927 01:02:21.557899       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0927 01:02:21.708695       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0927 01:02:21.714181       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0927 01:02:21.714216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 01:02:21.861845       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 01:02:21.875712       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 01:02:21.960227       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0927 01:02:21.962395       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0927 01:02:21.962775       1 controller.go:611] quota admission added evaluator for: endpoints
	I0927 01:02:21.963932       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 01:02:22.847760       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0927 01:02:23.127440       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0927 01:02:23.132477       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0927 01:02:23.136873       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0927 01:02:36.753769       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0927 01:02:36.799792       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0927 01:02:37.611434       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e87471d89654] <==
	I0927 01:02:36.670257       1 shared_informer.go:262] Caches are synced for crt configmap
	I0927 01:02:36.749420       1 shared_informer.go:262] Caches are synced for daemon sets
	I0927 01:02:36.756233       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4thjf"
	I0927 01:02:36.768541       1 shared_informer.go:262] Caches are synced for taint
	I0927 01:02:36.768699       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0927 01:02:36.768825       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0927 01:02:36.768899       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-937000. Assuming now as a timestamp.
	I0927 01:02:36.768955       1 event.go:294] "Event occurred" object="running-upgrade-937000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-937000 event: Registered Node running-upgrade-937000 in Controller"
	I0927 01:02:36.769449       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0927 01:02:36.794940       1 shared_informer.go:262] Caches are synced for disruption
	I0927 01:02:36.794969       1 disruption.go:371] Sending events to api server.
	I0927 01:02:36.796022       1 shared_informer.go:262] Caches are synced for deployment
	I0927 01:02:36.796053       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0927 01:02:36.801268       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0927 01:02:36.807167       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-rjpns"
	I0927 01:02:36.814007       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kqvlb"
	I0927 01:02:36.845716       1 shared_informer.go:262] Caches are synced for expand
	I0927 01:02:36.853211       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 01:02:36.859521       1 shared_informer.go:262] Caches are synced for PV protection
	I0927 01:02:36.862800       1 shared_informer.go:262] Caches are synced for persistent volume
	I0927 01:02:36.864898       1 shared_informer.go:262] Caches are synced for attach detach
	I0927 01:02:36.872573       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 01:02:37.290659       1 shared_informer.go:262] Caches are synced for garbage collector
	I0927 01:02:37.346164       1 shared_informer.go:262] Caches are synced for garbage collector
	I0927 01:02:37.346175       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [3bdef5c3a97f] <==
	I0927 01:02:37.600150       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0927 01:02:37.600179       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0927 01:02:37.600190       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0927 01:02:37.609199       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0927 01:02:37.609210       1 server_others.go:206] "Using iptables Proxier"
	I0927 01:02:37.609225       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0927 01:02:37.609314       1 server.go:661] "Version info" version="v1.24.1"
	I0927 01:02:37.609319       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 01:02:37.609672       1 config.go:317] "Starting service config controller"
	I0927 01:02:37.609678       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0927 01:02:37.609686       1 config.go:226] "Starting endpoint slice config controller"
	I0927 01:02:37.609688       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0927 01:02:37.610543       1 config.go:444] "Starting node config controller"
	I0927 01:02:37.610570       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0927 01:02:37.713054       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0927 01:02:37.713054       1 shared_informer.go:262] Caches are synced for node config
	I0927 01:02:37.713078       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [257ae74b8541] <==
	W0927 01:02:20.750963       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:02:20.750966       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0927 01:02:20.750977       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:02:20.750979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0927 01:02:20.750990       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:02:20.750992       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0927 01:02:20.751003       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:02:20.751006       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0927 01:02:20.751025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:02:20.751028       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0927 01:02:20.751288       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:02:20.751293       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0927 01:02:20.754115       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:02:20.754144       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 01:02:21.626216       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 01:02:21.626294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0927 01:02:21.714676       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:02:21.714733       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0927 01:02:21.777020       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 01:02:21.777041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0927 01:02:21.808676       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:02:21.808819       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0927 01:02:21.814118       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:02:21.814178       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0927 01:02:21.946148       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-09-27 00:57:32 UTC, ends at Fri 2024-09-27 01:06:40 UTC. --
	Sep 27 01:02:24 running-upgrade-937000 kubelet[12454]: I0927 01:02:24.165782   12454 apiserver.go:52] "Watching apiserver"
	Sep 27 01:02:24 running-upgrade-937000 kubelet[12454]: I0927 01:02:24.595437   12454 reconciler.go:157] "Reconciler: start to sync state"
	Sep 27 01:02:24 running-upgrade-937000 kubelet[12454]: E0927 01:02:24.764858   12454 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-937000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-937000"
	Sep 27 01:02:24 running-upgrade-937000 kubelet[12454]: E0927 01:02:24.964382   12454 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-937000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-937000"
	Sep 27 01:02:25 running-upgrade-937000 kubelet[12454]: E0927 01:02:25.163296   12454 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-937000\" already exists" pod="kube-system/etcd-running-upgrade-937000"
	Sep 27 01:02:25 running-upgrade-937000 kubelet[12454]: I0927 01:02:25.361312   12454 request.go:601] Waited for 1.139323967s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 27 01:02:25 running-upgrade-937000 kubelet[12454]: E0927 01:02:25.363611   12454 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-937000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-937000"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.687189   12454 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.687599   12454 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.758977   12454 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.773153   12454 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.812576   12454 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.819163   12454 topology_manager.go:200] "Topology Admit Handler"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888278   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f5148bd-d18e-4104-9912-9eba23d1329c-xtables-lock\") pod \"kube-proxy-4thjf\" (UID: \"2f5148bd-d18e-4104-9912-9eba23d1329c\") " pod="kube-system/kube-proxy-4thjf"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888304   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f5148bd-d18e-4104-9912-9eba23d1329c-kube-proxy\") pod \"kube-proxy-4thjf\" (UID: \"2f5148bd-d18e-4104-9912-9eba23d1329c\") " pod="kube-system/kube-proxy-4thjf"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888316   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4hd\" (UniqueName: \"kubernetes.io/projected/7449344d-95ac-410d-ae5d-71d474bbd634-kube-api-access-rh4hd\") pod \"storage-provisioner\" (UID: \"7449344d-95ac-410d-ae5d-71d474bbd634\") " pod="kube-system/storage-provisioner"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888326   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6p82\" (UniqueName: \"kubernetes.io/projected/2f5148bd-d18e-4104-9912-9eba23d1329c-kube-api-access-n6p82\") pod \"kube-proxy-4thjf\" (UID: \"2f5148bd-d18e-4104-9912-9eba23d1329c\") " pod="kube-system/kube-proxy-4thjf"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888336   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7449344d-95ac-410d-ae5d-71d474bbd634-tmp\") pod \"storage-provisioner\" (UID: \"7449344d-95ac-410d-ae5d-71d474bbd634\") " pod="kube-system/storage-provisioner"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888345   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f667e4d-6bb3-44d1-9f40-5ac3c71834b0-config-volume\") pod \"coredns-6d4b75cb6d-rjpns\" (UID: \"9f667e4d-6bb3-44d1-9f40-5ac3c71834b0\") " pod="kube-system/coredns-6d4b75cb6d-rjpns"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.888354   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f5148bd-d18e-4104-9912-9eba23d1329c-lib-modules\") pod \"kube-proxy-4thjf\" (UID: \"2f5148bd-d18e-4104-9912-9eba23d1329c\") " pod="kube-system/kube-proxy-4thjf"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.988830   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f4aa54d-a159-41e6-af35-95d967979a1e-config-volume\") pod \"coredns-6d4b75cb6d-kqvlb\" (UID: \"5f4aa54d-a159-41e6-af35-95d967979a1e\") " pod="kube-system/coredns-6d4b75cb6d-kqvlb"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.990558   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5gx\" (UniqueName: \"kubernetes.io/projected/5f4aa54d-a159-41e6-af35-95d967979a1e-kube-api-access-sp5gx\") pod \"coredns-6d4b75cb6d-kqvlb\" (UID: \"5f4aa54d-a159-41e6-af35-95d967979a1e\") " pod="kube-system/coredns-6d4b75cb6d-kqvlb"
	Sep 27 01:02:36 running-upgrade-937000 kubelet[12454]: I0927 01:02:36.991683   12454 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb7dr\" (UniqueName: \"kubernetes.io/projected/9f667e4d-6bb3-44d1-9f40-5ac3c71834b0-kube-api-access-zb7dr\") pod \"coredns-6d4b75cb6d-rjpns\" (UID: \"9f667e4d-6bb3-44d1-9f40-5ac3c71834b0\") " pod="kube-system/coredns-6d4b75cb6d-rjpns"
	Sep 27 01:06:25 running-upgrade-937000 kubelet[12454]: I0927 01:06:25.716730   12454 scope.go:110] "RemoveContainer" containerID="d2033224d42269f53b41dda52bceab14dc0ab4d320ef62ba9e34fff2bb57f1bc"
	Sep 27 01:06:25 running-upgrade-937000 kubelet[12454]: I0927 01:06:25.735475   12454 scope.go:110] "RemoveContainer" containerID="400b7e552d08d1664838a905721139c78e2bf352d0d1c8ad8a992c83d39e06ec"
	
	
	==> storage-provisioner [37c276517b32] <==
	I0927 01:02:37.585879       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:02:37.590673       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:02:37.590692       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:02:37.594791       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:02:37.594923       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-937000_5bc5c7b6-58a2-4093-a634-962cfa7e0b20!
	I0927 01:02:37.595252       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4a2e95ff-412a-4da4-90c9-8c3db0e3b5e7", APIVersion:"v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-937000_5bc5c7b6-58a2-4093-a634-962cfa7e0b20 became leader
	I0927 01:02:37.694981       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-937000_5bc5c7b6-58a2-4093-a634-962cfa7e0b20!
	

-- /stdout --
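The component logs above show a control plane that came up cleanly at 01:02 (etcd elected itself leader at term 2, the apiserver and controller caches synced, kube-proxy started, and both coredns pods were wired up), while the kubelet's 01:06:25 RemoveContainer entries suggest control-plane containers had already exited by the time the status probe below runs. A dump in this same "==> etcd <== / ==> kubelet <==" format can be captured from a live profile with minikube's log command; a minimal sketch, using only flags this report itself mentions:

	# Capture the component log dump for the profile under test
	out/minikube-darwin-arm64 logs -p running-upgrade-937000 --file=logs.txt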
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-937000 -n running-upgrade-937000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-937000 -n running-upgrade-937000: exit status 2 (15.675779166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-937000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-937000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-937000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-937000: (1.116718833s)
--- FAIL: TestRunningBinaryUpgrade (596.19s)
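The probe that helpers_test.go runs above selects a single field of the status struct with a Go template, and the harness treats exit status 2 as potentially expected ("may be ok") because minikube's status exit code encodes component state rather than command failure. A stand-alone equivalent of that probe, as a minimal sketch with the profile name taken from the log above:

	# Query just the apiserver state; inspect the exit code separately
	out/minikube-darwin-arm64 status --format='{{.APIServer}}' -p running-upgrade-937000; echo "exit=$?"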

TestKubernetesUpgrade (18.76s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.010908166s)

-- stdout --
	* [kubernetes-upgrade-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-708000" primary control-plane node in "kubernetes-upgrade-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:00:01.225258    4322 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:00:01.225393    4322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:00:01.225397    4322 out.go:358] Setting ErrFile to fd 2...
	I0926 18:00:01.225399    4322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:00:01.225552    4322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:00:01.226876    4322 out.go:352] Setting JSON to false
	I0926 18:00:01.244898    4322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3564,"bootTime":1727395237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:00:01.244969    4322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:00:01.249575    4322 out.go:177] * [kubernetes-upgrade-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:00:01.257857    4322 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:00:01.257890    4322 notify.go:220] Checking for updates...
	I0926 18:00:01.263809    4322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:00:01.266843    4322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:00:01.270794    4322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:00:01.273841    4322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:00:01.276885    4322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:00:01.280196    4322 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:00:01.280264    4322 config.go:182] Loaded profile config "running-upgrade-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:00:01.280302    4322 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:00:01.283906    4322 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:00:01.289771    4322 start.go:297] selected driver: qemu2
	I0926 18:00:01.289778    4322 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:00:01.289788    4322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:00:01.292206    4322 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:00:01.294803    4322 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:00:01.297893    4322 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 18:00:01.297905    4322 cni.go:84] Creating CNI manager for ""
	I0926 18:00:01.297924    4322 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 18:00:01.297946    4322 start.go:340] cluster config:
	{Name:kubernetes-upgrade-708000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:00:01.301623    4322 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:00:01.304836    4322 out.go:177] * Starting "kubernetes-upgrade-708000" primary control-plane node in "kubernetes-upgrade-708000" cluster
	I0926 18:00:01.313055    4322 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 18:00:01.313085    4322 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 18:00:01.313093    4322 cache.go:56] Caching tarball of preloaded images
	I0926 18:00:01.313180    4322 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:00:01.313186    4322 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 18:00:01.313243    4322 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kubernetes-upgrade-708000/config.json ...
	I0926 18:00:01.313254    4322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kubernetes-upgrade-708000/config.json: {Name:mk00e7953f2e98782898b8cc07a16d9432f06940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:00:01.313542    4322 start.go:360] acquireMachinesLock for kubernetes-upgrade-708000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:00:01.313590    4322 start.go:364] duration metric: took 39µs to acquireMachinesLock for "kubernetes-upgrade-708000"
	I0926 18:00:01.313605    4322 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:00:01.313641    4322 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:00:01.316886    4322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:00:01.332959    4322 start.go:159] libmachine.API.Create for "kubernetes-upgrade-708000" (driver="qemu2")
	I0926 18:00:01.332988    4322 client.go:168] LocalClient.Create starting
	I0926 18:00:01.333059    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:00:01.333092    4322 main.go:141] libmachine: Decoding PEM data...
	I0926 18:00:01.333101    4322 main.go:141] libmachine: Parsing certificate...
	I0926 18:00:01.333140    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:00:01.333164    4322 main.go:141] libmachine: Decoding PEM data...
	I0926 18:00:01.333172    4322 main.go:141] libmachine: Parsing certificate...
	I0926 18:00:01.333585    4322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:00:01.549718    4322 main.go:141] libmachine: Creating SSH key...
	I0926 18:00:01.738528    4322 main.go:141] libmachine: Creating Disk image...
	I0926 18:00:01.738546    4322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:00:01.738745    4322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:01.748091    4322 main.go:141] libmachine: STDOUT: 
	I0926 18:00:01.748116    4322 main.go:141] libmachine: STDERR: 
	I0926 18:00:01.748175    4322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2 +20000M
	I0926 18:00:01.756090    4322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:00:01.756114    4322 main.go:141] libmachine: STDERR: 
	I0926 18:00:01.756137    4322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:01.756142    4322 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:00:01.756158    4322 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:00:01.756198    4322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:6a:b3:d7:07:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:01.757826    4322 main.go:141] libmachine: STDOUT: 
	I0926 18:00:01.757839    4322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:00:01.757859    4322 client.go:171] duration metric: took 424.87675ms to LocalClient.Create
	I0926 18:00:03.759963    4322 start.go:128] duration metric: took 2.446369375s to createHost
	I0926 18:00:03.760045    4322 start.go:83] releasing machines lock for "kubernetes-upgrade-708000", held for 2.446514292s
	W0926 18:00:03.760105    4322 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:00:03.770653    4322 out.go:177] * Deleting "kubernetes-upgrade-708000" in qemu2 ...
	W0926 18:00:03.801748    4322 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:00:03.801775    4322 start.go:729] Will try again in 5 seconds ...
	I0926 18:00:08.803880    4322 start.go:360] acquireMachinesLock for kubernetes-upgrade-708000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:00:08.804514    4322 start.go:364] duration metric: took 512.084µs to acquireMachinesLock for "kubernetes-upgrade-708000"
	I0926 18:00:08.804684    4322 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:00:08.805011    4322 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:00:08.817795    4322 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:00:08.860297    4322 start.go:159] libmachine.API.Create for "kubernetes-upgrade-708000" (driver="qemu2")
	I0926 18:00:08.860339    4322 client.go:168] LocalClient.Create starting
	I0926 18:00:08.860434    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:00:08.860501    4322 main.go:141] libmachine: Decoding PEM data...
	I0926 18:00:08.860517    4322 main.go:141] libmachine: Parsing certificate...
	I0926 18:00:08.860565    4322 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:00:08.860604    4322 main.go:141] libmachine: Decoding PEM data...
	I0926 18:00:08.860619    4322 main.go:141] libmachine: Parsing certificate...
	I0926 18:00:08.861049    4322 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:00:09.040947    4322 main.go:141] libmachine: Creating SSH key...
	I0926 18:00:09.140339    4322 main.go:141] libmachine: Creating Disk image...
	I0926 18:00:09.140347    4322 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:00:09.140588    4322 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:09.151298    4322 main.go:141] libmachine: STDOUT: 
	I0926 18:00:09.151318    4322 main.go:141] libmachine: STDERR: 
	I0926 18:00:09.151387    4322 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2 +20000M
	I0926 18:00:09.159953    4322 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:00:09.159970    4322 main.go:141] libmachine: STDERR: 
	I0926 18:00:09.159984    4322 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:09.159988    4322 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:00:09.159999    4322 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:00:09.160033    4322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:26:cd:ea:d3:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:09.161878    4322 main.go:141] libmachine: STDOUT: 
	I0926 18:00:09.161903    4322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:00:09.161923    4322 client.go:171] duration metric: took 301.585667ms to LocalClient.Create
	I0926 18:00:11.164095    4322 start.go:128] duration metric: took 2.359098792s to createHost
	I0926 18:00:11.164200    4322 start.go:83] releasing machines lock for "kubernetes-upgrade-708000", held for 2.35972575s
	W0926 18:00:11.164549    4322 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:00:11.177631    4322 out.go:201] 
	W0926 18:00:11.181674    4322 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:00:11.181724    4322 out.go:270] * 
	* 
	W0926 18:00:11.184709    4322 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:00:11.193503    4322 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
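Both create attempts above fail at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" dialing the /var/run/socket_vmnet Unix socket, so no VM ever boots and the 5-second retry in start.go:729 cannot succeed while the socket has no listener. A hedged diagnostic sketch for the host side (the paths come from the log; running socket_vmnet as a Homebrew root service follows minikube's qemu2 driver docs and is an assumption about this CI host's setup):

	# Is the socket present, i.e. is anything serving it?
	ls -l /var/run/socket_vmnet
	# Assumption: socket_vmnet installed via Homebrew; (re)start its root service
	sudo brew services start socket_vmnet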
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-708000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-708000: (3.336119833s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-708000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-708000 status --format={{.Host}}: exit status 7 (59.060708ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
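Exit status 7 with "Stopped" is the expected shape right after an explicit stop, so the test moves on to the upgrade start below. As an aside for anyone scripting around these probes, status also offers machine-readable output, which avoids parsing template fragments; a minimal sketch:

	# JSON status for the same profile (-o/--output is a documented status flag)
	out/minikube-darwin-arm64 status -p kubernetes-upgrade-708000 -o json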
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.177341459s)

-- stdout --
	* [kubernetes-upgrade-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-708000" primary control-plane node in "kubernetes-upgrade-708000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:00:14.633083    4529 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:00:14.633262    4529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:00:14.633265    4529 out.go:358] Setting ErrFile to fd 2...
	I0926 18:00:14.633267    4529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:00:14.633392    4529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:00:14.634304    4529 out.go:352] Setting JSON to false
	I0926 18:00:14.650461    4529 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3577,"bootTime":1727395237,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:00:14.650533    4529 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:00:14.655481    4529 out.go:177] * [kubernetes-upgrade-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:00:14.662601    4529 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:00:14.662642    4529 notify.go:220] Checking for updates...
	I0926 18:00:14.669445    4529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:00:14.672506    4529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:00:14.675457    4529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:00:14.678487    4529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:00:14.681524    4529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:00:14.684815    4529 config.go:182] Loaded profile config "kubernetes-upgrade-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0926 18:00:14.685058    4529 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:00:14.689490    4529 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:00:14.696395    4529 start.go:297] selected driver: qemu2
	I0926 18:00:14.696400    4529 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:00:14.696449    4529 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:00:14.698610    4529 cni.go:84] Creating CNI manager for ""
	I0926 18:00:14.698638    4529 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:00:14.698659    4529 start.go:340] cluster config:
	{Name:kubernetes-upgrade-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-708000 Namespace:
default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:00:14.701880    4529 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:00:14.706459    4529 out.go:177] * Starting "kubernetes-upgrade-708000" primary control-plane node in "kubernetes-upgrade-708000" cluster
	I0926 18:00:14.713417    4529 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:00:14.713433    4529 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:00:14.713441    4529 cache.go:56] Caching tarball of preloaded images
	I0926 18:00:14.713488    4529 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:00:14.713493    4529 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:00:14.713538    4529 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kubernetes-upgrade-708000/config.json ...
	I0926 18:00:14.714026    4529 start.go:360] acquireMachinesLock for kubernetes-upgrade-708000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:00:14.714055    4529 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "kubernetes-upgrade-708000"
	I0926 18:00:14.714063    4529 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:00:14.714069    4529 fix.go:54] fixHost starting: 
	I0926 18:00:14.714196    4529 fix.go:112] recreateIfNeeded on kubernetes-upgrade-708000: state=Stopped err=<nil>
	W0926 18:00:14.714205    4529 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:00:14.719453    4529 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-708000" ...
	I0926 18:00:14.723523    4529 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:00:14.723565    4529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:26:cd:ea:d3:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:14.725377    4529 main.go:141] libmachine: STDOUT: 
	I0926 18:00:14.725390    4529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:00:14.725415    4529 fix.go:56] duration metric: took 11.347333ms for fixHost
	I0926 18:00:14.725420    4529 start.go:83] releasing machines lock for "kubernetes-upgrade-708000", held for 11.361542ms
	W0926 18:00:14.725426    4529 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:00:14.725452    4529 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:00:14.725456    4529 start.go:729] Will try again in 5 seconds ...
	I0926 18:00:19.727591    4529 start.go:360] acquireMachinesLock for kubernetes-upgrade-708000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:00:19.727999    4529 start.go:364] duration metric: took 328.291µs to acquireMachinesLock for "kubernetes-upgrade-708000"
	I0926 18:00:19.728059    4529 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:00:19.728079    4529 fix.go:54] fixHost starting: 
	I0926 18:00:19.728640    4529 fix.go:112] recreateIfNeeded on kubernetes-upgrade-708000: state=Stopped err=<nil>
	W0926 18:00:19.728662    4529 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:00:19.733361    4529 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-708000" ...
	I0926 18:00:19.737041    4529 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:00:19.737335    4529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:26:cd:ea:d3:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubernetes-upgrade-708000/disk.qcow2
	I0926 18:00:19.745490    4529 main.go:141] libmachine: STDOUT: 
	I0926 18:00:19.745545    4529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:00:19.745627    4529 fix.go:56] duration metric: took 17.549916ms for fixHost
	I0926 18:00:19.745648    4529 start.go:83] releasing machines lock for "kubernetes-upgrade-708000", held for 17.630125ms
	W0926 18:00:19.745813    4529 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:00:19.754006    4529 out.go:201] 
	W0926 18:00:19.756988    4529 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:00:19.757011    4529 out.go:270] * 
	* 
	W0926 18:00:19.758539    4529 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:00:19.768921    4529 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-708000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-708000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-708000 version --output=json: exit status 1 (60.464458ms)

** stderr ** 
	error: context "kubernetes-upgrade-708000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-26 18:00:19.842988 -0700 PDT m=+2789.526850959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-708000 -n kubernetes-upgrade-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-708000 -n kubernetes-upgrade-708000: exit status 7 (32.966084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-708000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-708000
--- FAIL: TestKubernetesUpgrade (18.76s)
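Both restart attempts above fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the socket_vmnet daemon is evidently not running on this agent and the VM is never started. A minimal Go sketch of the same reachability probe (the socket path comes from the log; socketVMnetReachable is a hypothetical helper, not part of minikube):

// Probe the socket_vmnet control socket the way socket_vmnet_client would.
// Minimal sketch; the helper name is illustrative, not minikube's code.
package main

import (
	"fmt"
	"net"
	"time"
)

func socketVMnetReachable(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	// On this agent this prints a "connection refused" error, matching the log.
	if err := socketVMnetReachable("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
	}
}

Where the daemon is installed via Homebrew, starting it (typically `sudo brew services start socket_vmnet`) clears this error; the same failure signature recurs across the other qemu2 tests in this report.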

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19711
- KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1314156627/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)
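This failure is environmental rather than an upgrade regression: hyperkit is an Intel-only hypervisor, and this agent is darwin/arm64, so minikube refuses the driver up front (DRV_UNSUPPORTED_OS, exit status 56). A sketch of the kind of platform gate involved, using runtime.GOOS/GOARCH (illustrative; not minikube's actual code):

// Illustrative platform gate for the hyperkit driver; a sketch, not
// minikube's implementation.
package main

import (
	"fmt"
	"runtime"
)

func driverSupported(driver string) error {
	if driver == "hyperkit" && !(runtime.GOOS == "darwin" && runtime.GOARCH == "amd64") {
		return fmt.Errorf("the driver %q is not supported on %s/%s",
			driver, runtime.GOOS, runtime.GOARCH)
	}
	return nil
}

func main() {
	fmt.Println(driverSupported("hyperkit")) // on darwin/arm64: an error, as in the test
}

The second subtest below fails identically; both hyperkit skip-upgrade tests can only pass on darwin/amd64 agents.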

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19711
- KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current380973286/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.10s)

TestStoppedBinaryUpgrade/Upgrade (576.45s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3284124500 start -p stopped-upgrade-211000 --memory=2200 --vm-driver=qemu2 
E0926 18:00:48.268642    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3284124500 start -p stopped-upgrade-211000 --memory=2200 --vm-driver=qemu2 : (39.951824375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3284124500 -p stopped-upgrade-211000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3284124500 -p stopped-upgrade-211000 stop: (12.112671125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0926 18:01:35.064213    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:04:38.031384    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 18:05:48.131101    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.304387875s)

-- stdout --
	* [stopped-upgrade-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-211000" primary control-plane node in "stopped-upgrade-211000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-211000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0926 18:01:13.172483    4572 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:01:13.173007    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:01:13.173021    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:01:13.173028    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:01:13.173595    4572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:01:13.175076    4572 out.go:352] Setting JSON to false
	I0926 18:01:13.193988    4572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3636,"bootTime":1727395237,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:01:13.194084    4572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:01:13.198925    4572 out.go:177] * [stopped-upgrade-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:01:13.205977    4572 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:01:13.206024    4572 notify.go:220] Checking for updates...
	I0926 18:01:13.212931    4572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:01:13.215893    4572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:01:13.219989    4572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:01:13.222981    4572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:01:13.225931    4572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:01:13.229245    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:01:13.232909    4572 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0926 18:01:13.235935    4572 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:01:13.239952    4572 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:01:13.247885    4572 start.go:297] selected driver: qemu2
	I0926 18:01:13.247890    4572 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:13.247940    4572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:01:13.250402    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:01:13.250431    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:01:13.250448    4572 start.go:340] cluster config:
	{Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:13.250499    4572 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:01:13.257931    4572 out.go:177] * Starting "stopped-upgrade-211000" primary control-plane node in "stopped-upgrade-211000" cluster
	I0926 18:01:13.261846    4572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 18:01:13.261861    4572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0926 18:01:13.261867    4572 cache.go:56] Caching tarball of preloaded images
	I0926 18:01:13.261931    4572 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:01:13.261945    4572 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0926 18:01:13.261996    4572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0926 18:01:13.262470    4572 start.go:360] acquireMachinesLock for stopped-upgrade-211000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:01:13.262512    4572 start.go:364] duration metric: took 34.542µs to acquireMachinesLock for "stopped-upgrade-211000"
	I0926 18:01:13.262520    4572 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:01:13.262525    4572 fix.go:54] fixHost starting: 
	I0926 18:01:13.262625    4572 fix.go:112] recreateIfNeeded on stopped-upgrade-211000: state=Stopped err=<nil>
	W0926 18:01:13.262634    4572 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:01:13.265940    4572 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-211000" ...
	I0926 18:01:13.273947    4572 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:01:13.274066    4572 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50504-:22,hostfwd=tcp::50505-:2376,hostname=stopped-upgrade-211000 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/disk.qcow2
	I0926 18:01:13.318181    4572 main.go:141] libmachine: STDOUT: 
	I0926 18:01:13.318205    4572 main.go:141] libmachine: STDERR: 
	I0926 18:01:13.318213    4572 main.go:141] libmachine: Waiting for VM to start (ssh -p 50504 docker@127.0.0.1)...
	I0926 18:01:33.869433    4572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/config.json ...
	I0926 18:01:33.870192    4572 machine.go:93] provisionDockerMachine start ...
	I0926 18:01:33.870353    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:33.870829    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:33.870843    4572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0926 18:01:33.956494    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0926 18:01:33.956523    4572 buildroot.go:166] provisioning hostname "stopped-upgrade-211000"
	I0926 18:01:33.956663    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:33.956891    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:33.956903    4572 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-211000 && echo "stopped-upgrade-211000" | sudo tee /etc/hostname
	I0926 18:01:34.038777    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-211000
	
	I0926 18:01:34.038880    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.039091    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.039109    4572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-211000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-211000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-211000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0926 18:01:34.110647    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0926 18:01:34.110662    4572 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19711-1075/.minikube CaCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19711-1075/.minikube}
	I0926 18:01:34.110671    4572 buildroot.go:174] setting up certificates
	I0926 18:01:34.110676    4572 provision.go:84] configureAuth start
	I0926 18:01:34.110684    4572 provision.go:143] copyHostCerts
	I0926 18:01:34.110769    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem, removing ...
	I0926 18:01:34.110777    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem
	I0926 18:01:34.110886    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.pem (1078 bytes)
	I0926 18:01:34.111074    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem, removing ...
	I0926 18:01:34.111079    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem
	I0926 18:01:34.111137    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/cert.pem (1123 bytes)
	I0926 18:01:34.111255    4572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem, removing ...
	I0926 18:01:34.111259    4572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem
	I0926 18:01:34.111310    4572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19711-1075/.minikube/key.pem (1679 bytes)
	I0926 18:01:34.111400    4572 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-211000 san=[127.0.0.1 localhost minikube stopped-upgrade-211000]
	I0926 18:01:34.360517    4572 provision.go:177] copyRemoteCerts
	I0926 18:01:34.360589    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0926 18:01:34.360601    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:34.396643    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0926 18:01:34.403243    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0926 18:01:34.409917    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0926 18:01:34.416906    4572 provision.go:87] duration metric: took 306.229542ms to configureAuth
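	configureAuth above regenerates the docker-machine server certificate for the SANs shown in the log (127.0.0.1, localhost, minikube, stopped-upgrade-211000) and copies ca.pem, server.pem, and server-key.pem into /etc/docker over SSH. A compressed crypto/x509 sketch of SAN-bearing certificate generation (self-signed for brevity and with error handling elided; the real flow signs with the minikube CA key):

// Sketch: emit a server certificate carrying the SANs seen in the log.
// Self-signed for brevity; minikube signs with its CA instead.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) // errors elided in this sketch
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-211000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-211000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}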
	I0926 18:01:34.416915    4572 buildroot.go:189] setting minikube options for container-runtime
	I0926 18:01:34.417010    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:01:34.417056    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.417141    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.417146    4572 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0926 18:01:34.483057    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0926 18:01:34.483067    4572 buildroot.go:70] root file system type: tmpfs
	I0926 18:01:34.483121    4572 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0926 18:01:34.483167    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.483283    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.483316    4572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0926 18:01:34.552202    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0926 18:01:34.552273    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.552386    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.552395    4572 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0926 18:01:34.919340    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0926 18:01:34.919353    4572 machine.go:96] duration metric: took 1.049180708s to provisionDockerMachine
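	The unit install just above uses a replace-only-if-changed idiom: render docker.service.new, diff it against the live unit, and only on a difference move it into place, daemon-reload, enable, and restart. Here diff fails because no live unit exists yet, which still takes the install branch (hence the "Created symlink" line). The same idiom in Go, as a sketch (paths from the log; assumes a systemd host):

// Sketch of the diff-or-install idiom from the SSH command above: swap in
// the rendered unit and restart docker only when its content changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(live, rendered string) error {
	oldUnit, _ := os.ReadFile(live) // a missing live unit reads as empty, so it differs
	newUnit, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(oldUnit, newUnit) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.Rename(rendered, live); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}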
	I0926 18:01:34.919365    4572 start.go:293] postStartSetup for "stopped-upgrade-211000" (driver="qemu2")
	I0926 18:01:34.919371    4572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0926 18:01:34.919437    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0926 18:01:34.919446    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:34.957997    4572 ssh_runner.go:195] Run: cat /etc/os-release
	I0926 18:01:34.959360    4572 info.go:137] Remote host: Buildroot 2021.02.12
	I0926 18:01:34.959369    4572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/addons for local assets ...
	I0926 18:01:34.959462    4572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19711-1075/.minikube/files for local assets ...
	I0926 18:01:34.959588    4572 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem -> 15972.pem in /etc/ssl/certs
	I0926 18:01:34.959723    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0926 18:01:34.962654    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /etc/ssl/certs/15972.pem (1708 bytes)
	I0926 18:01:34.970747    4572 start.go:296] duration metric: took 51.376666ms for postStartSetup
	I0926 18:01:34.970768    4572 fix.go:56] duration metric: took 21.708849208s for fixHost
	I0926 18:01:34.970817    4572 main.go:141] libmachine: Using SSH client type: native
	I0926 18:01:34.970939    4572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104af5c00] 0x104af8440 <nil>  [] 0s} localhost 50504 <nil> <nil>}
	I0926 18:01:34.970947    4572 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0926 18:01:35.034458    4572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727398894.944262754
	
	I0926 18:01:35.034467    4572 fix.go:216] guest clock: 1727398894.944262754
	I0926 18:01:35.034472    4572 fix.go:229] Guest: 2024-09-26 18:01:34.944262754 -0700 PDT Remote: 2024-09-26 18:01:34.97077 -0700 PDT m=+21.828480918 (delta=-26.507246ms)
	I0926 18:01:35.034483    4572 fix.go:200] guest clock delta is within tolerance: -26.507246ms
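	The clock check above runs `date +%s.%N` in the guest over SSH, parses the seconds.nanoseconds pair, and compares it against the host clock; the -26.5ms delta is inside tolerance, so no resync is performed. A sketch of the parse-and-compare step (the 2s tolerance is an assumption, not the value minikube uses):

// Sketch: compute guest-vs-host clock skew from `date +%s.%N` output,
// mirroring the fix.go guest-clock lines above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	d, _ := guestClockDelta("1727398894.944262754", time.Now()) // value from the log
	fmt.Println(d, "within tolerance:", d.Abs() < 2*time.Second)
}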
	I0926 18:01:35.034486    4572 start.go:83] releasing machines lock for "stopped-upgrade-211000", held for 21.772578042s
	I0926 18:01:35.034556    4572 ssh_runner.go:195] Run: cat /version.json
	I0926 18:01:35.034565    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:01:35.034568    4572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0926 18:01:35.034618    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	W0926 18:01:35.035171    4572 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50647->127.0.0.1:50504: read: connection reset by peer
	I0926 18:01:35.035187    4572 retry.go:31] will retry after 258.15249ms: ssh: handshake failed: read tcp 127.0.0.1:50647->127.0.0.1:50504: read: connection reset by peer
	W0926 18:01:35.066788    4572 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0926 18:01:35.066844    4572 ssh_runner.go:195] Run: systemctl --version
	I0926 18:01:35.068634    4572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0926 18:01:35.070229    4572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0926 18:01:35.070260    4572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0926 18:01:35.073543    4572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0926 18:01:35.078840    4572 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0926 18:01:35.078849    4572 start.go:495] detecting cgroup driver to use...
	I0926 18:01:35.078927    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:01:35.087146    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0926 18:01:35.090131    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0926 18:01:35.093577    4572 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0926 18:01:35.093603    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0926 18:01:35.097233    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:01:35.100997    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0926 18:01:35.104186    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0926 18:01:35.107000    4572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0926 18:01:35.109872    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0926 18:01:35.113293    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0926 18:01:35.116792    4572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0926 18:01:35.120082    4572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0926 18:01:35.122616    4572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0926 18:01:35.125689    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:35.196788    4572 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0926 18:01:35.203508    4572 start.go:495] detecting cgroup driver to use...
	I0926 18:01:35.203589    4572 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0926 18:01:35.208712    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:01:35.213614    4572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0926 18:01:35.223128    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0926 18:01:35.227755    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:01:35.232014    4572 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0926 18:01:35.272387    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0926 18:01:35.277358    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0926 18:01:35.282924    4572 ssh_runner.go:195] Run: which cri-dockerd
	I0926 18:01:35.284152    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0926 18:01:35.286640    4572 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0926 18:01:35.291570    4572 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0926 18:01:35.372366    4572 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0926 18:01:35.447166    4572 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0926 18:01:35.447226    4572 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0926 18:01:35.452826    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:35.524626    4572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:01:36.638958    4572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.11434775s)
	I0926 18:01:36.639027    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0926 18:01:36.643525    4572 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0926 18:01:36.649496    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:01:36.653895    4572 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0926 18:01:36.732564    4572 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0926 18:01:36.813653    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:36.893550    4572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0926 18:01:36.899407    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0926 18:01:36.903551    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:36.983964    4572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0926 18:01:37.021824    4572 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0926 18:01:37.021911    4572 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
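	Startup then blocks until the cri-dockerd socket exists, polling the path with a 60s budget before giving up. A sketch of that wait loop (the 500ms poll interval is assumed):

// Sketch: wait up to a deadline for a socket path to appear, as in the
// 60s wait for /var/run/cri-dockerd.sock above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}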
	I0926 18:01:37.023911    4572 start.go:563] Will wait 60s for crictl version
	I0926 18:01:37.023967    4572 ssh_runner.go:195] Run: which crictl
	I0926 18:01:37.025469    4572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0926 18:01:37.039876    4572 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0926 18:01:37.039949    4572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:01:37.056116    4572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0926 18:01:37.077725    4572 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0926 18:01:37.077809    4572 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0926 18:01:37.079082    4572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 18:01:37.082620    4572 kubeadm.go:883] updating cluster {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0926 18:01:37.082662    4572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0926 18:01:37.082719    4572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:01:37.095629    4572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 18:01:37.095637    4572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
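	The mismatch here is in registry naming, not missing content: the v1.24.1 preload tarball ships images tagged k8s.gcr.io/*, while this minikube checks for registry.k8s.io/kube-apiserver:v1.24.1, so the preload is judged absent and LoadCachedImages runs below. A sketch of that membership test over the `docker images --format {{.Repository}}:{{.Tag}}` output:

// Sketch: check whether an expected image ref appears verbatim in the
// `docker images` output captured above.
package main

import (
	"fmt"
	"strings"
)

func preloaded(imagesOut, want string) bool {
	for _, line := range strings.Split(imagesOut, "\n") {
		if strings.TrimSpace(line) == want {
			return true
		}
	}
	return false
}

func main() {
	out := "k8s.gcr.io/kube-apiserver:v1.24.1\nk8s.gcr.io/pause:3.7"
	// The old registry name is present but the new one is not -> "wasn't preloaded".
	fmt.Println(preloaded(out, "registry.k8s.io/kube-apiserver:v1.24.1")) // false
}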
	I0926 18:01:37.095686    4572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 18:01:37.098750    4572 ssh_runner.go:195] Run: which lz4
	I0926 18:01:37.100092    4572 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0926 18:01:37.101319    4572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0926 18:01:37.101330    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0926 18:01:38.102998    4572 docker.go:649] duration metric: took 1.002984333s to copy over tarball
	I0926 18:01:38.103077    4572 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0926 18:01:39.252756    4572 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.14969725s)
	I0926 18:01:39.252769    4572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0926 18:01:39.268275    4572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0926 18:01:39.271576    4572 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0926 18:01:39.276715    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:39.355927    4572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0926 18:01:40.839843    4572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.483941959s)
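Because the preload tarball was unpacked straight into /var under a running dockerd, minikube rewrites repositories.json from memory and restarts the daemon so the new overlay2 layers become visible; the ~1.5s restart above is that step. A hedged sketch of the same sequence as local commands (run on the guest; illustrative, not minikube's code):

    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        run("sudo", "systemctl", "daemon-reload")
        run("sudo", "systemctl", "restart", "docker")
        // Re-list images to confirm the extracted layers are now registered.
        run("docker", "images", "--format", "{{.Repository}}:{{.Tag}}")
    }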
	I0926 18:01:40.839971    4572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0926 18:01:40.851336    4572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0926 18:01:40.851344    4572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0926 18:01:40.851349    4572 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0926 18:01:40.856383    4572 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:40.858546    4572 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:40.860828    4572 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:40.861056    4572 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:40.862725    4572 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:40.862747    4572 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:40.864158    4572 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:40.864177    4572 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:40.865485    4572 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:40.865561    4572 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:40.866830    4572 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0926 18:01:40.867007    4572 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:40.868220    4572 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:40.868314    4572 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:40.869230    4572 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0926 18:01:40.869826    4572 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
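The interleaved image.go lines above come from resolving each required image in its own goroutine: a host-daemon lookup is tried first, and the "No such image" error simply routes that image to the on-disk cache under .minikube/cache/images/arm64. A minimal sketch of this fan-out with fallback (lookupDaemon and cachePath are illustrative stand-ins, not minikube's API):

    package main

    import (
        "errors"
        "fmt"
        "path/filepath"
        "strings"
        "sync"
    )

    func lookupDaemon(ref string) error {
        return errors.New("No such image: " + ref) // what the log shows for every ref
    }

    func cachePath(root, ref string) string {
        // registry.k8s.io/pause:3.7 -> <root>/registry.k8s.io/pause_3.7
        return filepath.Join(root, strings.ReplaceAll(ref, ":", "_"))
    }

    func main() {
        images := []string{"registry.k8s.io/pause:3.7", "registry.k8s.io/etcd:3.5.3-0"}
        var wg sync.WaitGroup
        for _, ref := range images {
            wg.Add(1)
            go func(ref string) {
                defer wg.Done()
                if err := lookupDaemon(ref); err != nil {
                    fmt.Println("falling back to", cachePath(".minikube/cache/images/arm64", ref))
                }
            }(ref)
        }
        wg.Wait()
    }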
	I0926 18:01:41.299109    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.309760    4572 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0926 18:01:41.309790    4572 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.309858    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0926 18:01:41.319127    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.320320    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.320629    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0926 18:01:41.329584    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:41.331374    4572 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0926 18:01:41.331391    4572 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0926 18:01:41.331401    4572 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.331392    4572 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.331453    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0926 18:01:41.331499    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0926 18:01:41.342582    4572 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0926 18:01:41.342603    4572 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0926 18:01:41.342672    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0926 18:01:41.354509    4572 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0926 18:01:41.354658    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.355344    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0926 18:01:41.355370    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0926 18:01:41.362673    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0926 18:01:41.367837    4572 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0926 18:01:41.367856    4572 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.367920    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0926 18:01:41.376632    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0926 18:01:41.384714    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0926 18:01:41.384850    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0926 18:01:41.387675    4572 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0926 18:01:41.387689    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0926 18:01:41.387696    4572 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0926 18:01:41.387707    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0926 18:01:41.387747    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0926 18:01:41.394086    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.426457    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0926 18:01:41.426579    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0926 18:01:41.426809    4572 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0926 18:01:41.426826    4572 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.426868    4572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0926 18:01:41.437990    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0926 18:01:41.438018    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0926 18:01:41.441449    4572 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0926 18:01:41.441459    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0926 18:01:41.458079    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0926 18:01:41.458210    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0926 18:01:41.485094    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0926 18:01:41.485116    4572 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0926 18:01:41.485122    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0926 18:01:41.485134    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0926 18:01:41.485155    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0926 18:01:41.523003    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0926 18:01:41.706255    4572 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0926 18:01:41.706278    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0926 18:01:41.826890    4572 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0926 18:01:41.827009    4572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.844017    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0926 18:01:41.844347    4572 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0926 18:01:41.844371    4572 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.844446    4572 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:01:41.857521    4572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0926 18:01:41.857651    4572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0926 18:01:41.859157    4572 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0926 18:01:41.859169    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0926 18:01:41.887809    4572 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0926 18:01:41.887824    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0926 18:01:42.119705    4572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0926 18:01:42.119753    4572 cache_images.go:92] duration metric: took 1.268431292s to LoadCachedImages
	W0926 18:01:42.119803    4572 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
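This warning is the step that actually sinks the image restore: every other image was transferred and loaded, but kube-proxy_v1.24.1 is missing from the local cache, so the existence check fails and LoadCachedImages aborts (the message apparently appears twice because it goes to both the log and user-facing output). A minimal reproduction of that check, with the path copied from the message above:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        p := ".minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1"
        if _, err := os.Stat(p); err != nil {
            fmt.Println("Unable to load cached images:", err) // matches the X line above
        }
    }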
	I0926 18:01:42.119809    4572 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0926 18:01:42.119855    4572 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-211000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0926 18:01:42.119942    4572 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0926 18:01:42.133154    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:01:42.133166    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:01:42.133171    4572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0926 18:01:42.133179    4572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-211000 NodeName:stopped-upgrade-211000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0926 18:01:42.133244    4572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-211000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0926 18:01:42.133301    4572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0926 18:01:42.136973    4572 binaries.go:44] Found k8s binaries, skipping transfer
	I0926 18:01:42.137020    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0926 18:01:42.139780    4572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0926 18:01:42.144409    4572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0926 18:01:42.149449    4572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0926 18:01:42.154960    4572 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0926 18:01:42.156003    4572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0926 18:01:42.159664    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:01:42.236386    4572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:01:42.241716    4572 certs.go:68] Setting up /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000 for IP: 10.0.2.15
	I0926 18:01:42.241726    4572 certs.go:194] generating shared ca certs ...
	I0926 18:01:42.241736    4572 certs.go:226] acquiring lock for ca certs: {Name:mk27a718ead98149a4ca4d0cc52012d8aa60b9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.241903    4572 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key
	I0926 18:01:42.241958    4572 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key
	I0926 18:01:42.241965    4572 certs.go:256] generating profile certs ...
	I0926 18:01:42.242040    4572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key
	I0926 18:01:42.242056    4572 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c
	I0926 18:01:42.242064    4572 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0926 18:01:42.351424    4572 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c ...
	I0926 18:01:42.351440    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c: {Name:mkdb72198780a42d20f224a6157ee1d5d04fb741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.351770    4572 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c ...
	I0926 18:01:42.351778    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c: {Name:mk7cd4a50e2508f8f479fffc7d9c3adfbafa760a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.351913    4572 certs.go:381] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt.a3531d9c -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt
	I0926 18:01:42.352064    4572 certs.go:385] copying /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key.a3531d9c -> /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key
	I0926 18:01:42.352232    4572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.key
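certs.go regenerates only what is missing: the shared CAs and client certs are reused, while a fresh apiserver serving cert is issued whose IP SANs cover the service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 10.0.2.15. A hedged sketch of issuing such a cert with crypto/x509 (key size is illustrative; the 26280h lifetime echoes the CertExpiration in the config dump above, and minikube of course reuses its existing minikubeCA key rather than generating one):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leaf := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("issued apiserver cert,", len(der), "DER bytes")
    }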
	I0926 18:01:42.352374    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem (1338 bytes)
	W0926 18:01:42.352408    4572 certs.go:480] ignoring /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597_empty.pem, impossibly tiny 0 bytes
	I0926 18:01:42.352414    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca-key.pem (1679 bytes)
	I0926 18:01:42.352438    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem (1078 bytes)
	I0926 18:01:42.352455    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem (1123 bytes)
	I0926 18:01:42.352476    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/key.pem (1679 bytes)
	I0926 18:01:42.352512    4572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem (1708 bytes)
	I0926 18:01:42.352828    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0926 18:01:42.360169    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0926 18:01:42.367090    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0926 18:01:42.373721    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0926 18:01:42.381170    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0926 18:01:42.388437    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0926 18:01:42.395627    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0926 18:01:42.402603    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0926 18:01:42.409702    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0926 18:01:42.416872    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/1597.pem --> /usr/share/ca-certificates/1597.pem (1338 bytes)
	I0926 18:01:42.423983    4572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/ssl/certs/15972.pem --> /usr/share/ca-certificates/15972.pem (1708 bytes)
	I0926 18:01:42.430667    4572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0926 18:01:42.435646    4572 ssh_runner.go:195] Run: openssl version
	I0926 18:01:42.437482    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15972.pem && ln -fs /usr/share/ca-certificates/15972.pem /etc/ssl/certs/15972.pem"
	I0926 18:01:42.440873    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.442393    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:28 /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.442419    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15972.pem
	I0926 18:01:42.444269    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15972.pem /etc/ssl/certs/3ec20f2e.0"
	I0926 18:01:42.447231    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0926 18:01:42.450130    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.451523    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.451554    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0926 18:01:42.453170    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0926 18:01:42.456255    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1597.pem && ln -fs /usr/share/ca-certificates/1597.pem /etc/ssl/certs/1597.pem"
	I0926 18:01:42.459356    4572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.460708    4572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:28 /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.460731    4572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1597.pem
	I0926 18:01:42.462544    4572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1597.pem /etc/ssl/certs/51391683.0"
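The hash-named symlinks above follow the c_rehash convention: `openssl x509 -hash` prints the 8-hex-digit subject-name hash, and the cert is linked as <hash>.0 in /etc/ssl/certs so that library lookups by subject find it. A sketch of deriving the link name (paths assumed; b5213941 is the hash the log itself shows for minikubeCA.pem):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        fmt.Println("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/" + hash + ".0")
    }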
	I0926 18:01:42.466809    4572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0926 18:01:42.468191    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0926 18:01:42.469960    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0926 18:01:42.471675    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0926 18:01:42.473497    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0926 18:01:42.475239    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0926 18:01:42.476856    4572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
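The `-checkend 86400` runs above ask whether each cert will still be valid 24 hours from now (non-zero exit means it expires inside the window and would need regeneration). The equivalent check in Go, given a PEM-encoded cert file:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(cert *x509.Certificate, d time.Duration) bool {
        return time.Now().Add(d).After(cert.NotAfter)
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", expiresWithin(cert, 24*time.Hour))
    }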
	I0926 18:01:42.478723    4572 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-211000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50538 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-211000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0926 18:01:42.478791    4572 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:01:42.489085    4572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0926 18:01:42.492560    4572 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0926 18:01:42.492570    4572 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0926 18:01:42.492604    4572 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0926 18:01:42.495478    4572 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0926 18:01:42.495806    4572 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-211000" does not appear in /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:01:42.495896    4572 kubeconfig.go:62] /Users/jenkins/minikube-integration/19711-1075/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-211000" cluster setting kubeconfig missing "stopped-upgrade-211000" context setting]
	I0926 18:01:42.496086    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:01:42.496514    4572 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ce570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:01:42.496846    4572 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0926 18:01:42.499376    4572 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-211000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
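The drift above is exactly the cross-version incompatibility this upgrade path exercises: the YAML written by the old minikube used a schemeless CRI socket and cgroupDriver: systemd, while the new binary wants unix:///var/run/cri-dockerd.sock and cgroupfs, so the control plane must be reconfigured from the new file. The detection itself is just diff's exit status; a sketch:

    // Exit status 0 means identical, 1 means drift, anything else is an error.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            fmt.Printf("config drift detected, will reconfigure:\n%s", out)
        } else if err != nil {
            fmt.Println("diff failed:", err)
        } else {
            fmt.Println("no drift")
        }
    }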
	I0926 18:01:42.499381    4572 kubeadm.go:1160] stopping kube-system containers ...
	I0926 18:01:42.499440    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0926 18:01:42.510072    4572 docker.go:483] Stopping containers: [240fdc9989e4 6389d9bb1ecd aaaef996b4e8 6707ec992f36 1b1da32ebdf8 cbdda73835f3 0be1021df9b4 ec810a93628b]
	I0926 18:01:42.510162    4572 ssh_runner.go:195] Run: docker stop 240fdc9989e4 6389d9bb1ecd aaaef996b4e8 6707ec992f36 1b1da32ebdf8 cbdda73835f3 0be1021df9b4 ec810a93628b
	I0926 18:01:42.520433    4572 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0926 18:01:42.525965    4572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:01:42.529237    4572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:01:42.529249    4572 kubeadm.go:157] found existing configuration files:
	
	I0926 18:01:42.529277    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0926 18:01:42.532270    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:01:42.532294    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:01:42.534897    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0926 18:01:42.537569    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:01:42.537599    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:01:42.540505    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0926 18:01:42.542924    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:01:42.542947    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:01:42.545653    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0926 18:01:42.548590    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:01:42.548614    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0926 18:01:42.551171    4572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:01:42.553920    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:42.577058    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:42.874306    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:43.013990    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0926 18:01:43.046216    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
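For a restart, minikube replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed YAML instead of running a full init. A hedged sketch of that loop, assuming kubeadm is on PATH rather than under /var/lib/minikube/binaries:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubeadm %v: %v\n%s", args, err, out)
            }
        }
    }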
	I0926 18:01:43.072410    4572 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:01:43.072509    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:43.573820    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:44.074542    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:01:44.078899    4572 api_server.go:72] duration metric: took 1.006521167s to wait for apiserver process to appear ...
	I0926 18:01:44.078908    4572 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:01:44.078924    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:49.080966    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:49.081074    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:54.081837    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:54.081891    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:01:59.082552    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:01:59.082626    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:04.083435    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:04.083518    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:09.084785    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:09.084835    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:14.086307    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:14.086396    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:19.087980    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:19.088005    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:24.090149    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:24.090184    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:29.092337    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:29.092389    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:34.094562    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:34.094599    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:39.096744    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:39.096807    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:44.098823    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
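Each "Checking apiserver healthz" line above is a GET against /healthz that dies on the HTTP client's timeout; on this QEMU arm64 run the stopped-upgrade-211000 apiserver never starts answering, which is what ultimately times the test out. A minimal version of the probe, assuming a 5s client timeout (consistent with the 5s spacing of the attempts) and skipping the kubeconfig-derived TLS setup for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            fmt.Println("stopped:", err) // matches the repeated log lines
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
    }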
	I0926 18:02:44.099350    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:44.136543    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:44.136704    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:44.157249    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:44.157371    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:44.172776    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:02:44.172874    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:44.185523    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:02:44.185603    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:44.196606    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:02:44.196677    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:44.207299    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:02:44.207367    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:44.220676    4572 logs.go:276] 0 containers: []
	W0926 18:02:44.220700    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:44.220772    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:44.231211    4572 logs.go:276] 0 containers: []
	W0926 18:02:44.231222    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:02:44.231230    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:02:44.231235    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:44.243329    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:44.243338    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:44.283586    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:02:44.283597    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:02:44.299062    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:02:44.299073    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:02:44.316346    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:02:44.316357    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:02:44.330540    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:44.330551    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:44.356227    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:02:44.356236    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:02:44.397894    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:02:44.397904    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:02:44.409057    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:02:44.409068    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:02:44.421137    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:02:44.421149    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:02:44.438738    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:02:44.438749    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:02:44.452630    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:44.452645    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:44.457234    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:44.457243    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:44.536201    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:02:44.536215    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:02:44.554769    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:02:44.554790    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
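When a probe window expires, minikube enumerates the k8s_* containers for each control-plane component and tails 400 lines from each (plus kubelet, dmesg, and docker journals) before resuming the healthz loop; the same gather-and-retry cycle repeats below. A sketch of the container discovery feeding those log dumps:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containersFor lists all (including exited) containers whose name carries
    // the kubelet's k8s_<component> prefix, mirroring the filters above.
    func containersFor(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids := containersFor(c)
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }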
	I0926 18:02:47.072587    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:52.074787    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:52.075008    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:52.091254    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:52.091343    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:52.104292    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:52.104381    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:02:52.121325    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:02:52.121409    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:02:52.136836    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:02:52.136931    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:02:52.147065    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:02:52.147137    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:02:52.161853    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:02:52.161926    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:02:52.172721    4572 logs.go:276] 0 containers: []
	W0926 18:02:52.172739    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:02:52.172813    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:02:52.183521    4572 logs.go:276] 0 containers: []
	W0926 18:02:52.183532    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:02:52.183538    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:02:52.183543    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:02:52.195170    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:02:52.195182    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:02:52.209930    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:02:52.209943    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:02:52.221683    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:02:52.221695    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:02:52.239064    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:02:52.239078    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:02:52.252559    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:02:52.252570    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:02:52.279518    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:02:52.279528    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:02:52.318588    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:02:52.318601    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:02:52.332560    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:02:52.332569    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:02:52.346726    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:02:52.346740    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:02:52.382846    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:02:52.382857    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:02:52.396570    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:02:52.396590    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:02:52.433852    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:02:52.433860    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:02:52.437823    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:02:52.437829    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:02:52.449282    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:02:52.449294    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:02:54.962470    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:02:59.963954    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:02:59.964121    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:02:59.981318    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:02:59.981418    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:02:59.993937    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:02:59.994060    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:00.004576    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:00.004655    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:00.014909    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:00.014980    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:00.026166    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:00.026246    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:00.036682    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:00.036762    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:00.046836    4572 logs.go:276] 0 containers: []
	W0926 18:03:00.046849    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:00.046918    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:00.065099    4572 logs.go:276] 0 containers: []
	W0926 18:03:00.065112    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:00.065120    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:00.065126    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:00.090324    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:00.090335    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:00.113806    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:00.113816    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:00.125743    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:00.125757    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:00.129049    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:00.129058    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:00.133186    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:00.133193    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:00.147930    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:00.147942    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:00.160175    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:00.160186    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:00.167681    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:00.167695    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:00.181626    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:00.181637    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:00.193144    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:00.193157    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:00.198536    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:00.198549    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:00.213070    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:00.213080    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:00.213703    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:00.213714    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:00.226419    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:00.226430    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:02.727600    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:07.729810    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:07.729975    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:07.741271    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:07.741373    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:07.752185    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:07.752277    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:07.762660    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:07.762747    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:07.773216    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:07.773308    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:07.783998    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:07.784077    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:07.794816    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:07.794887    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:07.805353    4572 logs.go:276] 0 containers: []
	W0926 18:03:07.805366    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:07.805440    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:07.816445    4572 logs.go:276] 0 containers: []
	W0926 18:03:07.816459    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:07.816467    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:07.816473    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:07.820770    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:07.820777    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:07.855300    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:07.855313    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:07.896842    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:07.896854    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:07.910788    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:07.910798    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:07.935056    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:07.935070    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:07.947416    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:07.947427    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:07.986026    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:07.986033    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:08.004048    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:08.004062    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:08.021210    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:08.021219    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:08.033430    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:08.033441    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:08.051418    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:08.051429    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:08.062920    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:08.062932    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:08.078201    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:08.078211    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:08.089819    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:08.089832    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:10.604967    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:15.607167    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:15.607497    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:15.633379    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:15.633505    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:15.651534    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:15.651629    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:15.665296    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:15.665381    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:15.676935    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:15.677018    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:15.687559    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:15.687636    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:15.702772    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:15.702849    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:15.712806    4572 logs.go:276] 0 containers: []
	W0926 18:03:15.712819    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:15.712891    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:15.722858    4572 logs.go:276] 0 containers: []
	W0926 18:03:15.722869    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:15.722879    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:15.722884    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:15.761674    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:15.761684    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:15.775512    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:15.775521    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:15.793834    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:15.793844    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:15.808766    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:15.808780    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:15.819856    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:15.819868    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:15.831589    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:15.831603    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:15.846749    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:15.846760    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:15.882711    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:15.882722    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:15.908424    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:15.908432    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:15.912457    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:15.912464    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:15.929637    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:15.929646    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:15.942854    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:15.942864    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:15.980733    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:15.980748    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:15.993099    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:15.993114    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
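Every failed probe triggers the same collection pass seen throughout this section: per-container tails via docker logs, systemd unit logs via journalctl, and a kubectl describe against the local kubeconfig. The commands below are copied verbatim from the Run: lines above and can be replayed manually inside the VM when debugging a stuck start (the container ID is one of the kube-apiserver IDs recorded in this log):

	docker logs --tail 400 6ed036197ac8
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig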
	I0926 18:03:18.507117    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:23.508383    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:23.508473    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:23.520529    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:23.520616    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:23.531746    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:23.531833    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:23.542285    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:23.542366    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:23.554261    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:23.554346    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:23.566258    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:23.566418    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:23.577864    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:23.577944    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:23.588431    4572 logs.go:276] 0 containers: []
	W0926 18:03:23.588442    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:23.588510    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:23.598947    4572 logs.go:276] 0 containers: []
	W0926 18:03:23.598957    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:23.598964    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:23.598969    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:23.639827    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:23.639841    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:23.644439    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:23.644448    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:23.659704    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:23.659715    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:23.678055    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:23.678070    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:23.689236    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:23.689252    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:23.700997    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:23.701013    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:23.713100    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:23.713111    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:23.735573    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:23.735588    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:23.760737    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:23.760747    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:23.795005    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:23.795018    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:23.833737    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:23.833751    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:23.845640    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:23.845651    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:23.863073    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:23.863087    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:23.876201    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:23.876212    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:26.393881    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:31.396179    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:31.396292    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:31.408478    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:31.408563    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:31.426206    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:31.426289    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:31.438437    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:31.438527    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:31.450462    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:31.450553    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:31.462255    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:31.462337    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:31.474106    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:31.474186    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:31.485546    4572 logs.go:276] 0 containers: []
	W0926 18:03:31.485560    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:31.485640    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:31.510839    4572 logs.go:276] 0 containers: []
	W0926 18:03:31.510853    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:31.510861    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:31.510867    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:31.514931    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:31.514938    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:31.555172    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:31.555187    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:31.569378    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:31.569390    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:31.583517    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:31.583531    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:31.597694    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:31.597709    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:31.614769    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:31.614783    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:31.638298    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:31.638312    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:31.650902    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:31.650917    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:31.662849    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:31.662864    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:31.674491    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:31.674506    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:31.688315    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:31.688330    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:31.725636    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:31.725644    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:31.761749    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:31.761760    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:31.773638    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:31.773649    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:34.290849    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:39.292932    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:39.293025    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:39.304344    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:39.304434    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:39.316034    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:39.316117    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:39.326409    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:39.326492    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:39.336634    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:39.336717    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:39.347252    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:39.347324    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:39.360130    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:39.360210    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:39.370232    4572 logs.go:276] 0 containers: []
	W0926 18:03:39.370250    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:39.370322    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:39.380401    4572 logs.go:276] 0 containers: []
	W0926 18:03:39.380413    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:39.380420    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:39.380426    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:39.417369    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:39.417381    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:39.434946    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:39.434959    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:39.459585    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:39.459592    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:39.463742    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:39.463749    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:39.502623    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:39.502649    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:39.517111    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:39.517121    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:39.532594    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:39.532607    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:39.546409    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:39.546419    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:39.559923    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:39.559933    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:39.573921    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:39.573932    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:39.590842    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:39.590855    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:39.602591    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:39.602601    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:39.641175    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:39.641185    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:39.655357    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:39.655368    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:42.172089    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:47.174155    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:47.174317    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:47.185409    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:47.185494    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:47.196373    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:47.196461    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:47.207224    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:47.207305    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:47.217348    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:47.217430    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:47.235285    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:47.235370    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:47.245954    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:47.246037    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:47.260637    4572 logs.go:276] 0 containers: []
	W0926 18:03:47.260648    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:47.260728    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:47.271529    4572 logs.go:276] 0 containers: []
	W0926 18:03:47.271542    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:47.271551    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:47.271556    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:47.308971    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:47.308980    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:47.346361    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:47.346372    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:47.361054    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:47.361067    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:47.375959    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:47.375969    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:47.387433    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:47.387445    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:47.406472    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:47.406483    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:47.429524    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:47.429530    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:47.441159    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:47.441170    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:47.454080    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:47.454090    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:47.489626    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:47.489635    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:47.503801    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:47.503811    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:47.518297    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:47.518308    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:47.522423    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:47.522429    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:47.536592    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:47.536611    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:50.062244    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:03:55.064208    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:03:55.064319    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:03:55.075868    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:03:55.075972    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:03:55.091401    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:03:55.091487    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:03:55.101972    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:03:55.102052    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:03:55.113148    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:03:55.113227    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:03:55.123200    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:03:55.123285    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:03:55.133727    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:03:55.133810    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:03:55.143732    4572 logs.go:276] 0 containers: []
	W0926 18:03:55.143742    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:03:55.143810    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:03:55.154299    4572 logs.go:276] 0 containers: []
	W0926 18:03:55.154310    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:03:55.154316    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:03:55.154322    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:03:55.158600    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:03:55.158608    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:03:55.172454    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:03:55.172468    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:03:55.184588    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:03:55.184600    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:03:55.223994    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:03:55.224013    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:03:55.262523    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:03:55.262535    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:03:55.276675    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:03:55.276688    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:03:55.290841    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:03:55.290857    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:03:55.302607    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:03:55.302620    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:03:55.325837    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:03:55.325845    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:03:55.363212    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:03:55.363226    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:03:55.377710    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:03:55.377722    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:03:55.389466    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:03:55.389476    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:03:55.404687    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:03:55.404700    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:03:55.421867    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:03:55.421881    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:03:57.935478    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:02.937504    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:02.937662    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:02.948343    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:02.948430    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:02.958927    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:02.959012    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:02.968970    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:02.969043    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:02.980266    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:02.980348    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:02.991243    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:02.991327    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:03.002937    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:03.003025    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:03.015268    4572 logs.go:276] 0 containers: []
	W0926 18:04:03.015280    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:03.015355    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:03.025857    4572 logs.go:276] 0 containers: []
	W0926 18:04:03.025874    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:03.025880    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:03.025886    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:03.045864    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:03.045877    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:03.063505    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:03.063514    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:03.075999    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:03.076013    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:03.090929    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:03.090943    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:03.128232    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:03.128253    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:03.171673    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:03.171684    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:03.185635    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:03.185645    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:03.197199    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:03.197213    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:03.208657    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:03.208667    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:03.213091    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:03.213099    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:03.247374    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:03.247389    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:03.262229    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:03.262242    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:03.279960    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:03.279971    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:03.302753    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:03.302761    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:05.815978    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:10.817919    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:10.818044    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:10.830548    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:10.830635    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:10.841055    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:10.841147    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:10.851624    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:10.851708    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:10.862386    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:10.862476    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:10.873158    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:10.873245    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:10.883988    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:10.884066    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:10.893983    4572 logs.go:276] 0 containers: []
	W0926 18:04:10.893995    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:10.894063    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:10.904339    4572 logs.go:276] 0 containers: []
	W0926 18:04:10.904353    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:10.904362    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:10.904368    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:10.923952    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:10.923964    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:10.936090    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:10.936104    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:10.958957    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:10.958965    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:10.972022    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:10.972035    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:11.010000    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:11.010016    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:11.043886    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:11.043896    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:11.062126    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:11.062138    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:11.076159    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:11.076170    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:11.091424    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:11.091435    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:11.134001    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:11.134018    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:11.138320    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:11.138327    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:11.153305    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:11.153315    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:11.164512    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:11.164522    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:11.181144    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:11.181154    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:13.696037    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:18.698104    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:18.698228    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:18.713931    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:18.714019    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:18.724124    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:18.724209    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:18.738750    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:18.738831    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:18.749342    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:18.749422    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:18.759812    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:18.759901    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:18.772120    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:18.772202    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:18.783407    4572 logs.go:276] 0 containers: []
	W0926 18:04:18.783418    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:18.783490    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:18.794699    4572 logs.go:276] 0 containers: []
	W0926 18:04:18.794711    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:18.794718    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:18.794723    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:18.798734    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:18.798739    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:18.811416    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:18.811431    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:18.826524    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:18.826534    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:18.843264    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:18.843276    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:18.854928    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:18.854943    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:18.891640    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:18.891650    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:18.905493    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:18.905504    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:18.951982    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:18.951993    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:18.966412    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:18.966423    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:19.002650    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:19.002662    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:19.016481    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:19.016495    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:19.027573    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:19.027583    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:19.044865    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:19.044875    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:19.058706    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:19.058716    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:21.585305    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:26.587432    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:26.587659    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:26.610487    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:26.610617    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:26.626693    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:26.626792    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:26.640599    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:26.640687    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:26.651677    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:26.651764    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:26.662496    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:26.662578    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:26.673028    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:26.673108    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:26.683312    4572 logs.go:276] 0 containers: []
	W0926 18:04:26.683323    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:26.683390    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:26.693764    4572 logs.go:276] 0 containers: []
	W0926 18:04:26.693776    4572 logs.go:278] No container was found matching "storage-provisioner"
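	Before every log pass, the runner re-enumerates containers one component at a time with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, which is why kindnet and storage-provisioner repeatedly come back as "0 containers" on this cluster. A rough Go equivalent of that enumeration (component list copied from the log; not the actual logs.go implementation):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, c := range components {
	            out, err := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	            if err != nil {
	                continue
	            }
	            ids := strings.Fields(string(out)) // one ID per line; empty when none match
	            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	        }
	    }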
	I0926 18:04:26.693784    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:26.693790    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:26.705313    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:26.705326    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:26.720170    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:26.720184    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:26.734859    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:26.734868    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:26.749760    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:26.749775    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:26.761720    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:26.761731    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:26.784686    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:26.784693    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:26.796861    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:26.796876    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:26.836826    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:26.836845    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:26.872418    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:26.872430    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:26.912424    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:26.912436    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:26.924008    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:26.924021    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:26.941228    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:26.941238    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:26.957296    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:26.957306    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:26.961602    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:26.961608    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:29.478195    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:34.480524    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:34.480784    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:34.503599    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:34.503747    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:34.519858    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:34.519951    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:34.533201    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:34.533288    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:34.543866    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:34.543952    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:34.554630    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:34.554704    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:34.565212    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:34.565296    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:34.576040    4572 logs.go:276] 0 containers: []
	W0926 18:04:34.576052    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:34.576119    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:34.586280    4572 logs.go:276] 0 containers: []
	W0926 18:04:34.586290    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:34.586298    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:34.586303    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:34.599425    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:34.599435    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:34.623791    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:34.623798    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:34.635929    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:34.635939    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:34.652527    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:34.652543    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:34.670869    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:34.670880    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:34.682314    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:34.682324    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:34.704574    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:34.704587    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:34.720418    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:34.720435    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:34.736305    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:34.736316    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:34.749969    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:34.749982    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:34.788759    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:34.788779    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:34.829440    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:34.829454    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:34.846252    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:34.846273    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:34.851363    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:34.851375    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:37.392969    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:42.395027    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:42.395195    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:42.409260    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:42.409358    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:42.421022    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:42.421108    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:42.431575    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:42.431657    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:42.441734    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:42.441818    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:42.458192    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:42.458275    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:42.468839    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:42.468917    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:42.479620    4572 logs.go:276] 0 containers: []
	W0926 18:04:42.479631    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:42.479706    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:42.489463    4572 logs.go:276] 0 containers: []
	W0926 18:04:42.489475    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:42.489484    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:42.489490    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:42.500631    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:42.500643    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:42.512431    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:42.512442    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:42.551527    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:42.551538    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:42.565132    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:42.565142    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:42.581769    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:42.581780    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:42.593602    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:42.593611    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:42.609145    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:42.609155    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:42.621252    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:42.621266    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:42.645440    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:42.645450    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:42.659261    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:42.659273    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:42.672102    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:42.672112    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:42.676671    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:42.676681    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:42.712806    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:42.712821    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:42.730823    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:42.730838    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:45.271098    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:50.271594    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:50.271777    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:50.282930    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:50.283015    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:50.293530    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:50.293601    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:50.304138    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:50.304217    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:50.314501    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:50.314577    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:50.326819    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:50.326901    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:50.337705    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:50.337787    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:50.348242    4572 logs.go:276] 0 containers: []
	W0926 18:04:50.348257    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:50.348325    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:50.358694    4572 logs.go:276] 0 containers: []
	W0926 18:04:50.358704    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:50.358712    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:50.358718    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:50.397203    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:50.397211    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:50.401139    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:50.401144    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:50.415219    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:50.415228    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:04:50.427512    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:50.427522    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:50.441309    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:50.441319    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:50.453064    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:50.453075    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:50.465016    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:50.465031    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:50.476524    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:50.476536    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:50.493831    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:50.493842    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:50.506724    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:50.506734    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:50.529486    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:50.529494    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:50.566162    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:50.566177    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:50.584260    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:50.584274    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:50.622844    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:50.622855    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:53.142762    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:04:58.144783    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:04:58.144941    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:04:58.156289    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:04:58.156370    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:04:58.166481    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:04:58.166568    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:04:58.178004    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:04:58.178091    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:04:58.189928    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:04:58.190015    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:04:58.201108    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:04:58.201188    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:04:58.211796    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:04:58.211868    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:04:58.221945    4572 logs.go:276] 0 containers: []
	W0926 18:04:58.221957    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:04:58.222030    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:04:58.234850    4572 logs.go:276] 0 containers: []
	W0926 18:04:58.234862    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:04:58.234869    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:04:58.234875    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:04:58.239572    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:04:58.239587    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:04:58.254316    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:04:58.254331    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:04:58.265678    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:04:58.265687    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:04:58.278474    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:04:58.278489    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:04:58.314297    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:04:58.314307    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:04:58.328546    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:04:58.328556    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:04:58.343647    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:04:58.343658    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:04:58.367942    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:04:58.367952    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:04:58.379188    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:04:58.379201    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:04:58.391918    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:04:58.391929    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:04:58.415611    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:04:58.415619    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:04:58.454173    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:04:58.454189    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:04:58.493133    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:04:58.493147    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:04:58.510295    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:04:58.510310    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:01.024304    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:06.026434    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:06.026701    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:06.046850    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:06.046957    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:06.066530    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:06.066607    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:06.077336    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:06.077415    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:06.087972    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:06.088056    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:06.111341    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:06.111422    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:06.127244    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:06.127334    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:06.141008    4572 logs.go:276] 0 containers: []
	W0926 18:05:06.141024    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:06.141084    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:06.152001    4572 logs.go:276] 0 containers: []
	W0926 18:05:06.152013    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:06.152021    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:06.152027    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:06.163492    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:06.163503    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:06.175261    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:06.175276    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:06.186973    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:06.186984    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:06.204419    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:06.204429    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:06.228010    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:06.228018    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:06.239746    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:06.239757    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:06.254798    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:06.254811    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:06.268869    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:06.268882    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:06.306719    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:06.306733    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:06.321748    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:06.321760    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:06.334441    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:06.334453    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:06.373439    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:06.373449    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:06.387319    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:06.387330    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:06.391682    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:06.391688    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:08.928217    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:13.930530    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:13.930837    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:13.959011    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:13.959140    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:13.977231    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:13.977341    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:13.990727    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:13.990820    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:14.004158    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:14.004244    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:14.014392    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:14.014473    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:14.025148    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:14.025232    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:14.035279    4572 logs.go:276] 0 containers: []
	W0926 18:05:14.035290    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:14.035365    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:14.046133    4572 logs.go:276] 0 containers: []
	W0926 18:05:14.046145    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:14.046153    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:14.046159    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:14.060447    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:14.060457    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:14.071937    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:14.071953    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:14.076275    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:14.076285    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:14.114586    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:14.114599    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:14.138002    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:14.138016    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:14.153619    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:14.153632    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:14.165939    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:14.165954    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:14.178185    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:14.178196    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:14.196431    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:14.196447    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:14.232652    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:14.232668    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:14.247227    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:14.247240    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:14.258943    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:14.258955    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:14.272384    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:14.272396    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:14.311453    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:14.311461    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:16.826879    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:21.828643    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:21.828801    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:21.840745    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:21.840836    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:21.851404    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:21.851497    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:21.862660    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:21.862748    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:21.873801    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:21.873884    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:21.884692    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:21.884774    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:21.901660    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:21.901736    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:21.912511    4572 logs.go:276] 0 containers: []
	W0926 18:05:21.912526    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:21.912600    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:21.922482    4572 logs.go:276] 0 containers: []
	W0926 18:05:21.922499    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:21.922508    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:21.922513    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:21.938279    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:21.938294    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:21.951081    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:21.951095    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:21.988135    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:21.988141    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:22.000812    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:22.000823    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:22.029368    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:22.029378    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:22.051697    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:22.051707    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:22.063267    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:22.063280    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:22.076918    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:22.076933    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:22.091358    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:22.091370    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:22.102493    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:22.102504    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:22.117650    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:22.117662    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:22.122202    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:22.122210    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:22.157591    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:22.157604    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:22.196818    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:22.196830    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:24.712247    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:29.714402    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:29.714659    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:29.736466    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:29.736565    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:29.751592    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:29.751691    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:29.764035    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:29.764120    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:29.774454    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:29.774542    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:29.785470    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:29.785558    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:29.803604    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:29.803686    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:29.814209    4572 logs.go:276] 0 containers: []
	W0926 18:05:29.814228    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:29.814303    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:29.824291    4572 logs.go:276] 0 containers: []
	W0926 18:05:29.824301    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:29.824311    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:29.824316    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:29.838412    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:29.838428    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:29.853287    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:29.853298    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:29.877106    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:29.877115    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:29.890283    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:29.890297    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:29.894558    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:29.894566    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:29.932865    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:29.932875    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:29.944811    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:29.944820    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:29.984721    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:29.984737    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:30.022443    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:30.022455    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:30.037781    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:30.037797    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:30.049946    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:30.049955    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:30.063453    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:30.063464    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:30.074291    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:30.074306    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:30.089309    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:30.089321    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:32.611915    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:37.614028    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:37.614193    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:05:37.628793    4572 logs.go:276] 2 containers: [6ed036197ac8 6707ec992f36]
	I0926 18:05:37.628892    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:05:37.641282    4572 logs.go:276] 2 containers: [42d8888f48e4 6389d9bb1ecd]
	I0926 18:05:37.641358    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:05:37.652847    4572 logs.go:276] 1 containers: [13d290387e07]
	I0926 18:05:37.652927    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:05:37.663528    4572 logs.go:276] 2 containers: [a39c8cf60874 aaaef996b4e8]
	I0926 18:05:37.663614    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:05:37.674489    4572 logs.go:276] 1 containers: [10c5ead2a521]
	I0926 18:05:37.674562    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:05:37.684861    4572 logs.go:276] 2 containers: [025fbbdc414c 240fdc9989e4]
	I0926 18:05:37.684943    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:05:37.702825    4572 logs.go:276] 0 containers: []
	W0926 18:05:37.702837    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:05:37.702908    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:05:37.712800    4572 logs.go:276] 0 containers: []
	W0926 18:05:37.712814    4572 logs.go:278] No container was found matching "storage-provisioner"
	I0926 18:05:37.712822    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:05:37.712828    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0926 18:05:37.751791    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:05:37.751825    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:05:37.756261    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:05:37.756270    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:05:37.790827    4572 logs.go:123] Gathering logs for kube-apiserver [6ed036197ac8] ...
	I0926 18:05:37.790843    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ed036197ac8"
	I0926 18:05:37.804535    4572 logs.go:123] Gathering logs for kube-scheduler [a39c8cf60874] ...
	I0926 18:05:37.804545    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a39c8cf60874"
	I0926 18:05:37.815996    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:05:37.816007    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:05:37.838076    4572 logs.go:123] Gathering logs for kube-scheduler [aaaef996b4e8] ...
	I0926 18:05:37.838084    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aaaef996b4e8"
	I0926 18:05:37.853198    4572 logs.go:123] Gathering logs for kube-controller-manager [025fbbdc414c] ...
	I0926 18:05:37.853212    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 025fbbdc414c"
	I0926 18:05:37.870916    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:05:37.870930    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:05:37.884244    4572 logs.go:123] Gathering logs for kube-apiserver [6707ec992f36] ...
	I0926 18:05:37.884261    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6707ec992f36"
	I0926 18:05:37.921227    4572 logs.go:123] Gathering logs for etcd [6389d9bb1ecd] ...
	I0926 18:05:37.921241    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6389d9bb1ecd"
	I0926 18:05:37.935714    4572 logs.go:123] Gathering logs for coredns [13d290387e07] ...
	I0926 18:05:37.935728    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13d290387e07"
	I0926 18:05:37.947794    4572 logs.go:123] Gathering logs for kube-proxy [10c5ead2a521] ...
	I0926 18:05:37.947808    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10c5ead2a521"
	I0926 18:05:37.959444    4572 logs.go:123] Gathering logs for etcd [42d8888f48e4] ...
	I0926 18:05:37.959461    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d8888f48e4"
	I0926 18:05:37.973471    4572 logs.go:123] Gathering logs for kube-controller-manager [240fdc9989e4] ...
	I0926 18:05:37.973486    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 240fdc9989e4"
	I0926 18:05:40.489219    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:45.491583    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:45.491670    4572 kubeadm.go:597] duration metric: took 4m3.136801625s to restartPrimaryControlPlane
	W0926 18:05:45.491733    4572 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0926 18:05:45.491760    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0926 18:05:46.448027    4572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0926 18:05:46.452908    4572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0926 18:05:46.455756    4572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0926 18:05:46.458990    4572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0926 18:05:46.458998    4572 kubeadm.go:157] found existing configuration files:
	
	I0926 18:05:46.459038    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf
	I0926 18:05:46.461415    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0926 18:05:46.461445    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0926 18:05:46.464214    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf
	I0926 18:05:46.467070    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0926 18:05:46.467098    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0926 18:05:46.469662    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf
	I0926 18:05:46.472244    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0926 18:05:46.472276    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0926 18:05:46.475297    4572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf
	I0926 18:05:46.477675    4572 kubeadm.go:163] "https://control-plane.minikube.internal:50538" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50538 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0926 18:05:46.477701    4572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
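	The four grep/rm pairs above implement one rule: a kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. A condensed Go sketch of that rule (endpoint and file names copied from the log; exit-status handling simplified, not minikube's kubeadm.go):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        endpoint := "https://control-plane.minikube.internal:50538"
	        for _, f := range []string{"admin.conf", "kubelet.conf",
	            "controller-manager.conf", "scheduler.conf"} {
	            path := "/etc/kubernetes/" + f
	            // grep exits non-zero when the endpoint is absent or the file is
	            // missing (status 2 in the log, since `kubeadm reset` already
	            // removed these files); either way the stale config is dropped.
	            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
	                exec.Command("sudo", "rm", "-f", path).Run()
	                fmt.Println("removed stale", path)
	            }
	        }
	    }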
	I0926 18:05:46.480312    4572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0926 18:05:46.497648    4572 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0926 18:05:46.497694    4572 kubeadm.go:310] [preflight] Running pre-flight checks
	I0926 18:05:46.555488    4572 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0926 18:05:46.555620    4572 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0926 18:05:46.555664    4572 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0926 18:05:46.605254    4572 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0926 18:05:46.609472    4572 out.go:235]   - Generating certificates and keys ...
	I0926 18:05:46.609507    4572 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0926 18:05:46.609562    4572 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0926 18:05:46.609607    4572 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0926 18:05:46.609737    4572 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0926 18:05:46.609815    4572 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0926 18:05:46.609845    4572 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0926 18:05:46.609885    4572 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0926 18:05:46.609915    4572 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0926 18:05:46.609949    4572 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0926 18:05:46.609985    4572 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0926 18:05:46.610005    4572 kubeadm.go:310] [certs] Using the existing "sa" key
	I0926 18:05:46.610030    4572 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0926 18:05:46.687430    4572 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0926 18:05:46.774785    4572 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0926 18:05:46.893289    4572 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0926 18:05:47.040080    4572 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0926 18:05:47.069356    4572 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0926 18:05:47.069884    4572 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0926 18:05:47.069932    4572 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0926 18:05:47.170283    4572 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0926 18:05:47.174502    4572 out.go:235]   - Booting up control plane ...
	I0926 18:05:47.174548    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0926 18:05:47.174591    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0926 18:05:47.174633    4572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0926 18:05:47.174672    4572 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0926 18:05:47.174757    4572 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0926 18:05:51.171572    4572 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.001255 seconds
	I0926 18:05:51.171632    4572 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0926 18:05:51.175268    4572 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0926 18:05:51.684622    4572 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0926 18:05:51.684775    4572 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-211000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0926 18:05:52.188099    4572 kubeadm.go:310] [bootstrap-token] Using token: kpqn1y.znfhhlvfvuxxug59
	I0926 18:05:52.192102    4572 out.go:235]   - Configuring RBAC rules ...
	I0926 18:05:52.192154    4572 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0926 18:05:52.192205    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0926 18:05:52.194132    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0926 18:05:52.199969    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0926 18:05:52.200916    4572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0926 18:05:52.201650    4572 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0926 18:05:52.206330    4572 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0926 18:05:52.388590    4572 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0926 18:05:52.592759    4572 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0926 18:05:52.593333    4572 kubeadm.go:310] 
	I0926 18:05:52.593412    4572 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0926 18:05:52.593416    4572 kubeadm.go:310] 
	I0926 18:05:52.593556    4572 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0926 18:05:52.593574    4572 kubeadm.go:310] 
	I0926 18:05:52.593607    4572 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0926 18:05:52.593677    4572 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0926 18:05:52.593710    4572 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0926 18:05:52.593716    4572 kubeadm.go:310] 
	I0926 18:05:52.593781    4572 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0926 18:05:52.593786    4572 kubeadm.go:310] 
	I0926 18:05:52.593817    4572 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0926 18:05:52.593820    4572 kubeadm.go:310] 
	I0926 18:05:52.593919    4572 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0926 18:05:52.594003    4572 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0926 18:05:52.594058    4572 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0926 18:05:52.594061    4572 kubeadm.go:310] 
	I0926 18:05:52.594213    4572 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0926 18:05:52.594252    4572 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0926 18:05:52.594256    4572 kubeadm.go:310] 
	I0926 18:05:52.594312    4572 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kpqn1y.znfhhlvfvuxxug59 \
	I0926 18:05:52.594386    4572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 \
	I0926 18:05:52.594401    4572 kubeadm.go:310] 	--control-plane 
	I0926 18:05:52.594403    4572 kubeadm.go:310] 
	I0926 18:05:52.594454    4572 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0926 18:05:52.594461    4572 kubeadm.go:310] 
	I0926 18:05:52.594506    4572 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kpqn1y.znfhhlvfvuxxug59 \
	I0926 18:05:52.594570    4572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3fda44b3178e2a9a18cad0c3f133cc2773c24b77ff2472c5e9e47121699490a5 
	I0926 18:05:52.594734    4572 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0926 18:05:52.594764    4572 cni.go:84] Creating CNI manager for ""
	I0926 18:05:52.594803    4572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:05:52.598645    4572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0926 18:05:52.605787    4572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0926 18:05:52.608935    4572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
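
The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Below is an illustrative bridge conflist of the kind minikube writes for the bridge CNI it just recommended; the subnet, flag values, and plugin list are assumptions, not the exact file contents:

package main

import "os"

// conflist is a plausible bridge CNI config; the values below are assumed for
// illustration and are not the exact 496-byte payload from the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Write the config where the kubelet's CNI plugin discovery looks for it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
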
	I0926 18:05:52.614777    4572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0926 18:05:52.614844    4572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0926 18:05:52.614903    4572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-211000 minikube.k8s.io/updated_at=2024_09_26T18_05_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=stopped-upgrade-211000 minikube.k8s.io/primary=true
	I0926 18:05:52.660032    4572 ops.go:34] apiserver oom_adj: -16
	I0926 18:05:52.660071    4572 kubeadm.go:1113] duration metric: took 45.290792ms to wait for elevateKubeSystemPrivileges
	I0926 18:05:52.660141    4572 kubeadm.go:394] duration metric: took 4m10.319511542s to StartCluster
	I0926 18:05:52.660153    4572 settings.go:142] acquiring lock: {Name:mk68436efc4e8fe170d744b4cebdb7ddef61f64c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:05:52.660241    4572 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:05:52.660642    4572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/kubeconfig: {Name:mk9560fb3377d007cf139de436457ca7aa0f8d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:05:52.660829    4572 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:05:52.660849    4572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0926 18:05:52.660937    4572 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-211000"
	I0926 18:05:52.660947    4572 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-211000"
	W0926 18:05:52.660952    4572 addons.go:243] addon storage-provisioner should already be in state true
	I0926 18:05:52.660963    4572 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0926 18:05:52.661055    4572 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:05:52.661047    4572 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-211000"
	I0926 18:05:52.661102    4572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-211000"
	I0926 18:05:52.661294    4572 retry.go:31] will retry after 1.260194994s: connect: dial unix /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/monitor: connect: connection refused
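
The retry.go:31 line above shows minikube backing off after a refused connection to the machine's monitor socket. A generic sketch of that retry-with-jittered-delay pattern — the delay bounds and function names are illustrative, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or the overall deadline passes, sleeping a
// jittered delay between attempts, as in the "will retry after 1.26s" line.
func retry(deadline time.Duration, fn func() error) error {
	stop := time.Now().Add(deadline)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("giving up: %w", err)
		}
		// 1s base delay plus up to 1s of jitter (illustrative bounds).
		d := time.Second + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
}

func main() {
	_ = retry(5*time.Second, func() error { return errors.New("connection refused") })
}
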
	I0926 18:05:52.664742    4572 out.go:177] * Verifying Kubernetes components...
	I0926 18:05:52.671620    4572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0926 18:05:52.677750    4572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0926 18:05:52.683812    4572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:05:52.683820    4572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0926 18:05:52.683828    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:05:52.762538    4572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0926 18:05:52.768045    4572 api_server.go:52] waiting for apiserver process to appear ...
	I0926 18:05:52.768112    4572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0926 18:05:52.774036    4572 api_server.go:72] duration metric: took 113.199542ms to wait for apiserver process to appear ...
	I0926 18:05:52.774044    4572 api_server.go:88] waiting for apiserver healthz status ...
	I0926 18:05:52.774053    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:05:52.778429    4572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0926 18:05:53.924448    4572 kapi.go:59] client config for stopped-upgrade-211000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/stopped-upgrade-211000/client.key", CAFile:"/Users/jenkins/minikube-integration/19711-1075/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ce570), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0926 18:05:53.924592    4572 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-211000"
	W0926 18:05:53.924598    4572 addons.go:243] addon default-storageclass should already be in state true
	I0926 18:05:53.924611    4572 host.go:66] Checking if "stopped-upgrade-211000" exists ...
	I0926 18:05:53.925216    4572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0926 18:05:53.925222    4572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0926 18:05:53.925229    4572 sshutil.go:53] new ssh client: &{IP:localhost Port:50504 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/stopped-upgrade-211000/id_rsa Username:docker}
	I0926 18:05:53.962129    4572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0926 18:05:54.031334    4572 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0926 18:05:54.031345    4572 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0926 18:05:57.775994    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:05:57.776114    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:02.776759    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:02.776791    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:07.777071    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:07.777094    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:12.777480    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:12.777519    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:17.778131    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:17.778171    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:22.779007    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:22.779054    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0926 18:06:24.031687    4572 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0926 18:06:24.035909    4572 out.go:177] * Enabled addons: storage-provisioner
	I0926 18:06:24.043899    4572 addons.go:510] duration metric: took 31.384724125s for enable addons: enabled=[storage-provisioner]
	I0926 18:06:27.779133    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:27.779184    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:32.780459    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:32.780520    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:37.782198    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:37.782238    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:42.784199    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:42.784222    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:47.786089    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:06:47.786125    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:06:52.788082    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
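
The probes above follow a fixed pattern: each request to /healthz gets a short per-attempt timeout (the "Client.Timeout exceeded" errors), and the loop keeps re-checking until the overall node wait (6m0s, set earlier) expires. A minimal sketch of that loop, assuming a plain HTTPS client; minikube actually authenticates with the cluster CA and client certificates, which this sketch skips:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver until it answers 200 or the overall wait
// expires. Per-attempt and retry delays mirror the ~5s cadence in the log.
func pollHealthz(url string, wait time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: real code verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(wait)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
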
	I0926 18:06:52.788201    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:06:52.798937    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:06:52.799012    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:06:52.809861    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:06:52.809948    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:06:52.820796    4572 logs.go:276] 2 containers: [3b0777e7672e d962650ce184]
	I0926 18:06:52.820880    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:06:52.831604    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:06:52.831680    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:06:52.842566    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:06:52.842648    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:06:52.853603    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:06:52.853685    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:06:52.867763    4572 logs.go:276] 0 containers: []
	W0926 18:06:52.867775    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:06:52.867846    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:06:52.878434    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:06:52.878452    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:06:52.878458    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:06:52.889652    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:06:52.889663    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
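
The "container status" command above embeds a shell fallback: run crictl when it is available, otherwise fall back to docker ps -a. The same fallback sketched in Go, with illustrative function names and error handling:

package main

import (
	"os"
	"os/exec"
)

// containerStatus mirrors the `which crictl || echo crictl` trick: prefer
// crictl when it is on PATH, otherwise list containers with docker.
func containerStatus() error {
	tool := "docker"
	if _, err := exec.LookPath("crictl"); err == nil {
		tool = "crictl"
	}
	cmd := exec.Command("sudo", tool, "ps", "-a")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := containerStatus(); err != nil {
		os.Exit(1)
	}
}
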
	I0926 18:06:52.903419    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:06:52.903429    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:06:52.942771    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:06:52.942782    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:06:52.957617    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:06:52.957627    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:06:52.972133    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:06:52.972143    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:06:52.983925    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:06:52.983938    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:06:52.999217    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:06:52.999233    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:06:53.016784    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:06:53.016793    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:06:53.028324    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:06:53.028334    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:06:53.052141    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:06:53.052148    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:06:53.086485    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:06:53.086580    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:06:53.087790    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:06:53.087795    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:06:53.092070    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:06:53.092077    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:06:53.110544    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:06:53.110567    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:06:53.110594    4572 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0926 18:06:53.110599    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:06:53.110602    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:06:53.110605    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:06:53.110608    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:07:03.114287    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:07:08.116831    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:07:08.117008    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:07:08.136293    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:07:08.136389    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:07:08.150616    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:07:08.150702    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:07:08.162847    4572 logs.go:276] 2 containers: [3b0777e7672e d962650ce184]
	I0926 18:07:08.162926    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:07:08.173320    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:07:08.173397    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:07:08.183494    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:07:08.183573    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:07:08.193937    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:07:08.194009    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:07:08.203637    4572 logs.go:276] 0 containers: []
	W0926 18:07:08.203649    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:07:08.203704    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:07:08.213628    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:07:08.213644    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:07:08.213650    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:07:08.224782    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:07:08.224795    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:07:08.236360    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:07:08.236370    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:07:08.247590    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:07:08.247602    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:07:08.259708    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:07:08.259717    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:07:08.273909    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:07:08.273920    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:07:08.278458    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:07:08.278466    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:07:08.313038    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:07:08.313050    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:07:08.327094    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:07:08.327105    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:07:08.338335    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:07:08.338346    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:07:08.357968    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:07:08.357978    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:07:08.374929    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:07:08.374938    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:07:08.400153    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:07:08.400161    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:07:08.433360    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:08.433453    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:08.434596    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:08.434601    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:07:08.434626    4572 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0926 18:07:08.434630    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:08.434634    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:08.434636    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:08.434639    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:07:18.438336    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:07:23.440796    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:07:23.441359    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:07:23.484088    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:07:23.484227    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:07:23.505379    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:07:23.505500    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:07:23.519978    4572 logs.go:276] 2 containers: [3b0777e7672e d962650ce184]
	I0926 18:07:23.520065    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:07:23.532471    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:07:23.532548    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:07:23.542447    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:07:23.542514    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:07:23.552976    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:07:23.553054    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:07:23.562724    4572 logs.go:276] 0 containers: []
	W0926 18:07:23.562737    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:07:23.562799    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:07:23.577549    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:07:23.577562    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:07:23.577567    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:07:23.618075    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:07:23.618087    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:07:23.632263    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:07:23.632274    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:07:23.647221    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:07:23.647232    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:07:23.658720    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:07:23.658730    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:07:23.681743    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:07:23.681751    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:07:23.693225    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:07:23.693240    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:07:23.726225    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:23.726317    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:23.727456    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:07:23.727462    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:07:23.739284    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:07:23.739296    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:07:23.750539    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:07:23.750549    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:07:23.764851    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:07:23.764862    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:07:23.783267    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:07:23.783277    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:07:23.794259    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:07:23.794274    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:07:23.798830    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:23.798841    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:07:23.798864    4572 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0926 18:07:23.798870    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:23.798873    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:23.798877    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:23.798879    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:07:33.801523    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:07:38.804045    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:07:38.804538    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:07:38.840463    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:07:38.840616    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:07:38.861485    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:07:38.861595    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:07:38.876391    4572 logs.go:276] 2 containers: [3b0777e7672e d962650ce184]
	I0926 18:07:38.876476    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:07:38.888386    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:07:38.888468    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:07:38.898876    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:07:38.898958    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:07:38.908820    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:07:38.908897    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:07:38.919174    4572 logs.go:276] 0 containers: []
	W0926 18:07:38.919184    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:07:38.919251    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:07:38.929577    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:07:38.929591    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:07:38.929596    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:07:38.941562    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:07:38.941573    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:07:38.959588    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:07:38.959600    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:07:38.984563    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:07:38.984576    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:07:39.018874    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:39.018971    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:39.020102    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:07:39.020107    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:07:39.056022    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:07:39.056037    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:07:39.069836    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:07:39.069845    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:07:39.081479    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:07:39.081491    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:07:39.095977    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:07:39.095990    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:07:39.100258    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:07:39.100263    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:07:39.113824    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:07:39.113832    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:07:39.126094    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:07:39.126106    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:07:39.137249    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:07:39.137260    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:07:39.148705    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:39.148718    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:07:39.148744    4572 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0926 18:07:39.148749    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:39.148753    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	  Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:39.148761    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:39.148764    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:07:49.152521    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:07:54.155890    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:07:54.156505    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:07:54.192693    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:07:54.192857    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:07:54.220041    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:07:54.220162    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:07:54.233389    4572 logs.go:276] 2 containers: [3b0777e7672e d962650ce184]
	I0926 18:07:54.233483    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:07:54.244830    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:07:54.244920    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:07:54.255377    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:07:54.255464    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:07:54.265845    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:07:54.265923    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:07:54.276410    4572 logs.go:276] 0 containers: []
	W0926 18:07:54.276421    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:07:54.276490    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:07:54.286889    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:07:54.286903    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:07:54.286907    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:07:54.322313    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:07:54.322330    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:07:54.340717    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:07:54.340728    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:07:54.353374    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:07:54.353385    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:07:54.365926    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:07:54.365937    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:07:54.377551    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:07:54.377561    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:07:54.395506    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:07:54.395517    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:07:54.427926    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:54.428018    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:54.429152    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:07:54.429156    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:07:54.433222    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:07:54.433229    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:07:54.444592    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:07:54.444604    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:07:54.458867    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:07:54.458878    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:07:54.482217    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:07:54.482227    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:07:54.496348    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:07:54.496358    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:07:54.510349    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:54.510359    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:07:54.510382    4572 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0926 18:07:54.510387    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:07:54.510389    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:07:54.510405    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:07:54.510410    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:04.514020    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:08:09.516158    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
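Each retry cycle in this log opens with a probe of the apiserver's /healthz endpoint; the timestamps imply roughly a 5-second client timeout per probe and a 10-second pause between attempts, bounded by the 6-minute node wait reported in the final error. A minimal Go sketch of that polling pattern, assuming those intervals and a self-signed in-VM certificate (all assumptions, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs a single GET against /healthz with a short client
// timeout, mirroring the ~5s gap between "Checking" and "stopped" above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The in-VM apiserver certificate is not in the host trust
			// store, so this sketch skips verification (assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node"
	for time.Now().Before(deadline) {
		if err := checkHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(10 * time.Second) // retry interval inferred from timestamps
			continue
		}
		fmt.Println("apiserver is healthy")
		return
	}
	fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
}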
	I0926 18:08:09.516609    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:08:09.559635    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:08:09.559792    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:08:09.580461    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:08:09.580568    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:08:09.595960    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:08:09.596051    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:08:09.610832    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:08:09.610918    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:08:09.620922    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:08:09.620989    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:08:09.631578    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:08:09.631647    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:08:09.641995    4572 logs.go:276] 0 containers: []
	W0926 18:08:09.642007    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:08:09.642077    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:08:09.652529    4572 logs.go:276] 1 containers: [8c05df5faa5b]
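After each failed probe, the diagnostic sweep enumerates containers per control-plane component by running docker ps -a with a k8s_<component> name filter and a Go-template format string, exactly as the Run lines above show. A compact sketch of that enumeration (the component list and helper name are illustrative; requires docker on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is a hypothetical helper: list all containers (running or
// not) whose names carry the kubelet's k8s_<component> prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}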
	I0926 18:08:09.652545    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:08:09.652551    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:08:09.664652    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:08:09.664662    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:08:09.678803    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:08:09.678814    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:08:09.690820    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:08:09.690830    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:08:09.705928    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:08:09.705938    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:08:09.730522    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:08:09.730528    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:08:09.744383    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:08:09.744394    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:08:09.758583    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:08:09.758597    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:08:09.780073    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:08:09.780086    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:08:09.813843    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:09.813936    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:09.815072    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:08:09.815077    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:08:09.826284    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:08:09.826294    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:08:09.837333    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:08:09.837343    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:08:09.848674    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:08:09.848683    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:08:09.853193    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:08:09.853200    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:08:09.888281    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:08:09.888290    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:08:09.900118    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:09.900128    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:08:09.900155    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:08:09.900160    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:09.900163    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:09.900177    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:09.900181    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:19.903818    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:08:24.905819    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:08:24.906415    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:08:24.942619    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:08:24.942763    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:08:24.965011    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:08:24.965121    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:08:24.984744    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:08:24.984835    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:08:24.996495    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:08:24.996578    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:08:25.006934    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:08:25.007011    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:08:25.017796    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:08:25.017865    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:08:25.028186    4572 logs.go:276] 0 containers: []
	W0926 18:08:25.028196    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:08:25.028266    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:08:25.038320    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:08:25.038340    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:08:25.038347    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:08:25.052345    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:08:25.052354    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:08:25.067881    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:08:25.067897    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:08:25.079882    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:08:25.079893    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:08:25.091636    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:08:25.091648    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:08:25.115486    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:08:25.115496    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:08:25.127136    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:08:25.127145    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:08:25.160977    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:25.161069    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:25.162201    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:08:25.162205    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:08:25.197536    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:08:25.197551    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:08:25.216010    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:08:25.216023    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:08:25.220631    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:08:25.220640    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:08:25.235127    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:08:25.235140    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:08:25.249655    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:08:25.249668    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:08:25.261365    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:08:25.261379    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:08:25.278602    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:08:25.278616    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:08:25.290146    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:25.290156    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:08:25.290183    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:08:25.290187    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:25.290191    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:25.290196    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:25.290198    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:35.292515    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:08:40.294250    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:08:40.294394    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:08:40.309665    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:08:40.309755    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:08:40.322628    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:08:40.322715    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:08:40.341072    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:08:40.341153    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:08:40.357172    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:08:40.357258    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:08:40.367640    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:08:40.367720    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:08:40.378377    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:08:40.378444    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:08:40.389167    4572 logs.go:276] 0 containers: []
	W0926 18:08:40.389178    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:08:40.389248    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:08:40.399385    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:08:40.399399    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:08:40.399404    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:08:40.411304    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:08:40.411316    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:08:40.422654    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:08:40.422663    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:08:40.459735    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:08:40.459744    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:08:40.471869    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:08:40.471877    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:08:40.483476    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:08:40.483485    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:08:40.498484    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:08:40.498495    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:08:40.509952    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:08:40.509962    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:08:40.533748    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:08:40.533755    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:08:40.546571    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:08:40.546580    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:08:40.560907    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:08:40.560919    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:08:40.565144    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:08:40.565151    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:08:40.579781    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:08:40.579791    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:08:40.591311    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:08:40.591321    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:08:40.610216    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:08:40.610226    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:08:40.644305    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:40.644397    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:40.645533    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:40.645538    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:08:40.645560    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:08:40.645563    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:40.645596    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:40.645600    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:40.645606    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:50.649176    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:08:55.651651    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:08:55.651776    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:08:55.666504    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:08:55.666583    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:08:55.678227    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:08:55.678295    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:08:55.690203    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:08:55.690263    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:08:55.701624    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:08:55.701682    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:08:55.717175    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:08:55.717267    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:08:55.728180    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:08:55.728250    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:08:55.739595    4572 logs.go:276] 0 containers: []
	W0926 18:08:55.739610    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:08:55.739682    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:08:55.752423    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:08:55.752441    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:08:55.752446    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:08:55.775553    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:08:55.775568    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:08:55.799860    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:08:55.799873    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:08:55.804794    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:08:55.804807    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:08:55.821117    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:08:55.821128    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:08:55.832929    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:08:55.832941    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:08:55.866454    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:55.866548    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:55.867723    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:08:55.867733    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:08:55.881280    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:08:55.881292    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:08:55.895582    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:08:55.895594    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:08:55.922039    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:08:55.922051    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:08:55.938094    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:08:55.938107    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:08:55.951798    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:08:55.951810    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:08:55.970112    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:08:55.970125    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:08:56.008610    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:08:56.008620    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:08:56.028123    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:08:56.028136    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:08:56.040483    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:56.040494    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:08:56.040523    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:08:56.040528    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:08:56.040532    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:08:56.040536    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:56.040539    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:06.044146    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:09:11.046226    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:09:11.046769    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:09:11.107522    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:09:11.107646    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:09:11.123082    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:09:11.123165    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:09:11.135557    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:09:11.135650    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:09:11.146331    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:09:11.146409    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:09:11.156983    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:09:11.157066    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:09:11.167851    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:09:11.167938    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:09:11.178170    4572 logs.go:276] 0 containers: []
	W0926 18:09:11.178183    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:09:11.178254    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:09:11.188569    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:09:11.188584    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:09:11.188589    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:09:11.207268    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:09:11.207279    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:09:11.218742    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:09:11.218751    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:09:11.229993    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:09:11.230002    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:09:11.245097    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:09:11.245109    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:09:11.257458    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:09:11.257468    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:09:11.268847    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:09:11.268856    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:09:11.299676    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:09:11.299686    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:09:11.311744    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:09:11.311754    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:09:11.316005    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:09:11.316014    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:09:11.350661    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:09:11.350670    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:09:11.364433    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:09:11.364442    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:09:11.389640    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:09:11.389649    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:09:11.423004    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:11.423100    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:11.424233    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:09:11.424237    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:09:11.435383    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:09:11.435390    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:09:11.447221    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:11.447231    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:09:11.447258    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:09:11.447263    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:11.447266    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:11.447269    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:11.447272    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:21.449619    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:09:26.451642    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:09:26.452215    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:09:26.495077    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:09:26.495217    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:09:26.514084    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:09:26.514190    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:09:26.528761    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:09:26.528852    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:09:26.540312    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:09:26.540387    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:09:26.551198    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:09:26.551276    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:09:26.562056    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:09:26.562137    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:09:26.582003    4572 logs.go:276] 0 containers: []
	W0926 18:09:26.582014    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:09:26.582083    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:09:26.592678    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:09:26.592695    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:09:26.592700    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:09:26.607761    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:09:26.607772    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:09:26.626245    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:09:26.626259    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:09:26.637627    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:09:26.637638    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:09:26.652473    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:09:26.652482    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:09:26.670430    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:09:26.670439    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:09:26.702726    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:26.702818    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:26.703951    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:09:26.703956    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:09:26.718184    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:09:26.718194    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:09:26.729458    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:09:26.729466    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:09:26.753656    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:09:26.753663    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:09:26.765453    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:09:26.765465    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:09:26.769586    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:09:26.769592    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:09:26.802796    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:09:26.802808    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:09:26.814581    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:09:26.814592    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:09:26.829227    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:09:26.829239    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:09:26.841236    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:26.841249    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:09:26.841274    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:09:26.841280    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:26.841283    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:26.841287    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:26.841290    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:36.844987    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:09:41.847389    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:09:41.847948    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0926 18:09:41.888053    4572 logs.go:276] 1 containers: [69e20995260e]
	I0926 18:09:41.888211    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0926 18:09:41.910064    4572 logs.go:276] 1 containers: [4e0f8ef486fb]
	I0926 18:09:41.910194    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0926 18:09:41.926311    4572 logs.go:276] 4 containers: [97f7b82e37c5 922886c7e8c0 3b0777e7672e d962650ce184]
	I0926 18:09:41.926407    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0926 18:09:41.939103    4572 logs.go:276] 1 containers: [670a92dde374]
	I0926 18:09:41.939186    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0926 18:09:41.950474    4572 logs.go:276] 1 containers: [7113792ccc75]
	I0926 18:09:41.950550    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0926 18:09:41.961362    4572 logs.go:276] 1 containers: [07ca18ef8dfa]
	I0926 18:09:41.961441    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0926 18:09:41.971738    4572 logs.go:276] 0 containers: []
	W0926 18:09:41.971751    4572 logs.go:278] No container was found matching "kindnet"
	I0926 18:09:41.971824    4572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0926 18:09:41.982640    4572 logs.go:276] 1 containers: [8c05df5faa5b]
	I0926 18:09:41.982661    4572 logs.go:123] Gathering logs for kube-controller-manager [07ca18ef8dfa] ...
	I0926 18:09:41.982666    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07ca18ef8dfa"
	I0926 18:09:42.000294    4572 logs.go:123] Gathering logs for coredns [3b0777e7672e] ...
	I0926 18:09:42.000303    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b0777e7672e"
	I0926 18:09:42.017013    4572 logs.go:123] Gathering logs for kube-scheduler [670a92dde374] ...
	I0926 18:09:42.017023    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 670a92dde374"
	I0926 18:09:42.033698    4572 logs.go:123] Gathering logs for kubelet ...
	I0926 18:09:42.033708    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0926 18:09:42.065789    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:42.065882    4572 logs.go:138] Found kubelet problem: Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:42.067014    4572 logs.go:123] Gathering logs for kube-proxy [7113792ccc75] ...
	I0926 18:09:42.067018    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7113792ccc75"
	I0926 18:09:42.078856    4572 logs.go:123] Gathering logs for coredns [97f7b82e37c5] ...
	I0926 18:09:42.078866    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97f7b82e37c5"
	I0926 18:09:42.090698    4572 logs.go:123] Gathering logs for coredns [922886c7e8c0] ...
	I0926 18:09:42.090707    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922886c7e8c0"
	I0926 18:09:42.102623    4572 logs.go:123] Gathering logs for storage-provisioner [8c05df5faa5b] ...
	I0926 18:09:42.102632    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c05df5faa5b"
	I0926 18:09:42.113975    4572 logs.go:123] Gathering logs for Docker ...
	I0926 18:09:42.113984    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0926 18:09:42.137756    4572 logs.go:123] Gathering logs for kube-apiserver [69e20995260e] ...
	I0926 18:09:42.137765    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69e20995260e"
	I0926 18:09:42.153178    4572 logs.go:123] Gathering logs for etcd [4e0f8ef486fb] ...
	I0926 18:09:42.153189    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e0f8ef486fb"
	I0926 18:09:42.166728    4572 logs.go:123] Gathering logs for coredns [d962650ce184] ...
	I0926 18:09:42.166738    4572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d962650ce184"
	I0926 18:09:42.178594    4572 logs.go:123] Gathering logs for container status ...
	I0926 18:09:42.178606    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0926 18:09:42.190328    4572 logs.go:123] Gathering logs for dmesg ...
	I0926 18:09:42.190340    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0926 18:09:42.196961    4572 logs.go:123] Gathering logs for describe nodes ...
	I0926 18:09:42.196975    4572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0926 18:09:42.246231    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:42.246243    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0926 18:09:42.246271    4572 out.go:270] X Problems detected in kubelet:
	W0926 18:09:42.246277    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: W0927 01:06:06.023679    9770 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	W0926 18:09:42.246281    4572 out.go:270]   Sep 27 01:06:06 stopped-upgrade-211000 kubelet[9770]: E0927 01:06:06.023708    9770 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-211000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-211000' and this object
	I0926 18:09:42.246338    4572 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:42.246343    4572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:52.249885    4572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0926 18:09:57.252366    4572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0926 18:09:57.256176    4572 out.go:201] 
	W0926 18:09:57.260123    4572 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0926 18:09:57.260145    4572 out.go:270] * 
	W0926 18:09:57.260816    4572 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:57.268128    4572 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-211000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (576.45s)
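
Note on the failure above: it has two separable symptoms. First, the kubelet on the upgraded v1.24.1 node could not read the "kube-proxy" ConfigMap because the node authorizer found no relationship between the node and the object; second, the apiserver's /healthz endpoint never reported healthy within the 6m0s wait, which is what actually failed the test. A minimal manual triage from the host might look like the sketch below; the profile name, kubeconfig path, and endpoint are taken from this log, the commands themselves are not part of the test suite, and curl is assumed to be available in the guest image.

	# Poll the same healthz endpoint the test gave up on, from inside the guest:
	out/minikube-darwin-arm64 -p stopped-upgrade-211000 ssh -- curl -k https://10.0.2.15:8443/healthz
	# Once the apiserver answers, confirm the authorization symptom the kubelet logged:
	kubectl --kubeconfig=/Users/jenkins/minikube-integration/19711-1075/kubeconfig \
	  auth can-i list configmaps -n kube-system --as system:node:stopped-upgrade-211000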

TestPause/serial/Start (9.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-662000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-662000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.958167291s)

-- stdout --
	* [pause-662000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-662000" primary control-plane node in "pause-662000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-662000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-662000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-662000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-662000 -n pause-662000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-662000 -n pause-662000: exit status 7 (34.116375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-662000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.99s)
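
Note: this and every remaining failure in the run share one host-side root cause: the qemu2 driver could not connect to the socket_vmnet socket at /var/run/socket_vmnet, so no VM ever received its network device. A quick host check, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver documentation (the paths and service name are assumptions and may differ on other setups):

	ls -l /var/run/socket_vmnet                 # the listening socket should exist
	sudo launchctl list | grep -i socket_vmnet  # the daemon should be loaded
	sudo brew services restart socket_vmnet     # restart the service if it is down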

TestNoKubernetes/serial/StartWithK8s (9.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 : exit status 80 (9.845561625s)

-- stdout --
	* [NoKubernetes-843000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-843000" primary control-plane node in "NoKubernetes-843000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-843000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-843000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000: exit status 7 (43.793084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)
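
Note: the three NoKubernetes subtests that follow reuse this profile ("Using the qemu2 driver based on existing profile"), so they inherit the half-created VM and fail the same way. The cleanup the error text itself suggests would reset the profile between attempts:

	out/minikube-darwin-arm64 delete -p NoKubernetes-843000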

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243004875s)

-- stdout --
	* [NoKubernetes-843000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-843000
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000: exit status 7 (55.550084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.3s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 : exit status 80 (5.2429265s)

-- stdout --
	* [NoKubernetes-843000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-843000
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000: exit status 7 (60.3615ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

TestNoKubernetes/serial/StartNoArgs (5.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 : exit status 80 (5.273280083s)

-- stdout --
	* [NoKubernetes-843000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-843000
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-843000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-843000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-843000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-843000 -n NoKubernetes-843000: exit status 7 (54.879375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-843000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.33s)

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.836840291s)

-- stdout --
	* [auto-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-790000" primary control-plane node in "auto-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:08:08.364315    4798 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:08:08.364571    4798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:08.364575    4798 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:08.364577    4798 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:08.364707    4798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:08:08.366020    4798 out.go:352] Setting JSON to false
	I0926 18:08:08.382546    4798 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4051,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:08:08.382614    4798 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:08:08.390365    4798 out.go:177] * [auto-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:08:08.397321    4798 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:08:08.397351    4798 notify.go:220] Checking for updates...
	I0926 18:08:08.403218    4798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:08:08.406317    4798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:08:08.409303    4798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:08:08.412279    4798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:08:08.415245    4798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:08:08.418660    4798 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:08:08.418721    4798 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:08:08.418769    4798 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:08:08.423244    4798 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:08:08.430288    4798 start.go:297] selected driver: qemu2
	I0926 18:08:08.430296    4798 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:08:08.430303    4798 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:08:08.432592    4798 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:08:08.436899    4798 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:08:08.440998    4798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:08:08.441013    4798 cni.go:84] Creating CNI manager for ""
	I0926 18:08:08.441039    4798 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:08:08.441043    4798 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:08:08.441072    4798 start.go:340] cluster config:
	{Name:auto-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:08:08.444596    4798 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:08:08.452233    4798 out.go:177] * Starting "auto-790000" primary control-plane node in "auto-790000" cluster
	I0926 18:08:08.456195    4798 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:08:08.456212    4798 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:08:08.456221    4798 cache.go:56] Caching tarball of preloaded images
	I0926 18:08:08.456291    4798 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:08:08.456297    4798 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:08:08.456353    4798 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/auto-790000/config.json ...
	I0926 18:08:08.456363    4798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/auto-790000/config.json: {Name:mka915c666f57100e17565dc5cdb83a528ccde14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:08:08.456685    4798 start.go:360] acquireMachinesLock for auto-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:08.456725    4798 start.go:364] duration metric: took 33.291µs to acquireMachinesLock for "auto-790000"
	I0926 18:08:08.456737    4798 start.go:93] Provisioning new machine with config: &{Name:auto-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:08.456763    4798 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:08.465261    4798 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:08.480683    4798 start.go:159] libmachine.API.Create for "auto-790000" (driver="qemu2")
	I0926 18:08:08.480715    4798 client.go:168] LocalClient.Create starting
	I0926 18:08:08.480784    4798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:08.480813    4798 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:08.480822    4798 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:08.480863    4798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:08.480887    4798 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:08.480897    4798 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:08.481332    4798 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:08.640202    4798 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:08.740909    4798 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:08.740919    4798 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:08.741109    4798 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:08.750459    4798 main.go:141] libmachine: STDOUT: 
	I0926 18:08:08.750478    4798 main.go:141] libmachine: STDERR: 
	I0926 18:08:08.750537    4798 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2 +20000M
	I0926 18:08:08.758757    4798 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:08.758784    4798 main.go:141] libmachine: STDERR: 
	I0926 18:08:08.758804    4798 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:08.758810    4798 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:08.758823    4798 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:08.758849    4798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:54:04:2a:40:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:08.760563    4798 main.go:141] libmachine: STDOUT: 
	I0926 18:08:08.760577    4798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:08.760598    4798 client.go:171] duration metric: took 279.891709ms to LocalClient.Create
	I0926 18:08:10.762741    4798 start.go:128] duration metric: took 2.306068708s to createHost
	I0926 18:08:10.762816    4798 start.go:83] releasing machines lock for "auto-790000", held for 2.306204292s
	W0926 18:08:10.762912    4798 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:10.770303    4798 out.go:177] * Deleting "auto-790000" in qemu2 ...
	W0926 18:08:10.806758    4798 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:10.806789    4798 start.go:729] Will try again in 5 seconds ...
	I0926 18:08:15.808700    4798 start.go:360] acquireMachinesLock for auto-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:15.809383    4798 start.go:364] duration metric: took 589.041µs to acquireMachinesLock for "auto-790000"
	I0926 18:08:15.809525    4798 start.go:93] Provisioning new machine with config: &{Name:auto-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:15.809830    4798 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:15.816462    4798 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:15.866290    4798 start.go:159] libmachine.API.Create for "auto-790000" (driver="qemu2")
	I0926 18:08:15.866353    4798 client.go:168] LocalClient.Create starting
	I0926 18:08:15.866506    4798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:15.866586    4798 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:15.866600    4798 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:15.866677    4798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:15.866726    4798 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:15.866752    4798 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:15.867312    4798 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:16.034032    4798 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:16.106555    4798 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:16.106562    4798 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:16.106751    4798 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:16.116292    4798 main.go:141] libmachine: STDOUT: 
	I0926 18:08:16.116308    4798 main.go:141] libmachine: STDERR: 
	I0926 18:08:16.116366    4798 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2 +20000M
	I0926 18:08:16.124342    4798 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:16.124376    4798 main.go:141] libmachine: STDERR: 
	I0926 18:08:16.124396    4798 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:16.124405    4798 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:16.124415    4798 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:16.124444    4798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:37:a1:f9:fe:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/auto-790000/disk.qcow2
	I0926 18:08:16.126222    4798 main.go:141] libmachine: STDOUT: 
	I0926 18:08:16.126237    4798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:16.126251    4798 client.go:171] duration metric: took 259.897083ms to LocalClient.Create
	I0926 18:08:18.128350    4798 start.go:128] duration metric: took 2.318600042s to createHost
	I0926 18:08:18.128430    4798 start.go:83] releasing machines lock for "auto-790000", held for 2.31914375s
	W0926 18:08:18.128883    4798 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:18.137339    4798 out.go:201] 
	W0926 18:08:18.147494    4798 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:08:18.147535    4798 out.go:270] * 
	W0926 18:08:18.150227    4798 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:08:18.160426    4798 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)
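
Note: the stderr above shows exactly how the driver launches QEMU: qemu-system-aarch64 is wrapped by /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as an inherited descriptor (-netdev socket,id=net0,fd=3). That connection step can be reproduced in isolation with a trivial wrapped command; this is a hypothetical check using only the binary and socket path taken from the log:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket reachable" \
	  || echo "connection refused, matching the failures above"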

TestNetworkPlugins/group/kindnet/Start (9.72s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.712856s)

-- stdout --
	* [kindnet-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-790000" primary control-plane node in "kindnet-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:08:20.329474    4907 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:08:20.329591    4907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:20.329594    4907 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:20.329597    4907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:20.329714    4907 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:08:20.330785    4907 out.go:352] Setting JSON to false
	I0926 18:08:20.346739    4907 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4063,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:08:20.346835    4907 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:08:20.352618    4907 out.go:177] * [kindnet-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:08:20.360439    4907 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:08:20.360460    4907 notify.go:220] Checking for updates...
	I0926 18:08:20.367438    4907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:08:20.371141    4907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:08:20.375656    4907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:08:20.378431    4907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:08:20.381433    4907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:08:20.384808    4907 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:08:20.384868    4907 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:08:20.384913    4907 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:08:20.389393    4907 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:08:20.396505    4907 start.go:297] selected driver: qemu2
	I0926 18:08:20.396513    4907 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:08:20.396520    4907 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:08:20.398634    4907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:08:20.401390    4907 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:08:20.404507    4907 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:08:20.404525    4907 cni.go:84] Creating CNI manager for "kindnet"
	I0926 18:08:20.404532    4907 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0926 18:08:20.404562    4907 start.go:340] cluster config:
	{Name:kindnet-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:08:20.407809    4907 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:08:20.413443    4907 out.go:177] * Starting "kindnet-790000" primary control-plane node in "kindnet-790000" cluster
	I0926 18:08:20.417408    4907 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:08:20.417423    4907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:08:20.417429    4907 cache.go:56] Caching tarball of preloaded images
	I0926 18:08:20.417484    4907 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:08:20.417489    4907 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:08:20.417536    4907 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kindnet-790000/config.json ...
	I0926 18:08:20.417546    4907 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kindnet-790000/config.json: {Name:mke8b68e5fe932065ab98449687c167f9e521101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:08:20.417746    4907 start.go:360] acquireMachinesLock for kindnet-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:20.417776    4907 start.go:364] duration metric: took 25.208µs to acquireMachinesLock for "kindnet-790000"
	I0926 18:08:20.417787    4907 start.go:93] Provisioning new machine with config: &{Name:kindnet-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:20.417819    4907 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:20.426446    4907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:20.441645    4907 start.go:159] libmachine.API.Create for "kindnet-790000" (driver="qemu2")
	I0926 18:08:20.441680    4907 client.go:168] LocalClient.Create starting
	I0926 18:08:20.441752    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:20.441785    4907 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:20.441795    4907 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:20.441839    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:20.441862    4907 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:20.441869    4907 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:20.442204    4907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:20.599912    4907 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:20.634390    4907 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:20.634399    4907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:20.634593    4907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:20.643844    4907 main.go:141] libmachine: STDOUT: 
	I0926 18:08:20.643860    4907 main.go:141] libmachine: STDERR: 
	I0926 18:08:20.643914    4907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2 +20000M
	I0926 18:08:20.651758    4907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:20.651775    4907 main.go:141] libmachine: STDERR: 
	I0926 18:08:20.651791    4907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:20.651797    4907 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:20.651809    4907 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:20.651836    4907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e5:53:db:a0:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:20.653525    4907 main.go:141] libmachine: STDOUT: 
	I0926 18:08:20.653539    4907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:20.653561    4907 client.go:171] duration metric: took 211.8865ms to LocalClient.Create
	I0926 18:08:22.655621    4907 start.go:128] duration metric: took 2.2378945s to createHost
	I0926 18:08:22.655662    4907 start.go:83] releasing machines lock for "kindnet-790000", held for 2.237999625s
	W0926 18:08:22.655694    4907 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:22.674496    4907 out.go:177] * Deleting "kindnet-790000" in qemu2 ...
	W0926 18:08:22.696412    4907 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:22.696426    4907 start.go:729] Will try again in 5 seconds ...
	I0926 18:08:27.698329    4907 start.go:360] acquireMachinesLock for kindnet-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:27.698871    4907 start.go:364] duration metric: took 455.208µs to acquireMachinesLock for "kindnet-790000"
	I0926 18:08:27.698993    4907 start.go:93] Provisioning new machine with config: &{Name:kindnet-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:27.699275    4907 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:27.706803    4907 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:27.744324    4907 start.go:159] libmachine.API.Create for "kindnet-790000" (driver="qemu2")
	I0926 18:08:27.744382    4907 client.go:168] LocalClient.Create starting
	I0926 18:08:27.744486    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:27.744557    4907 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:27.744569    4907 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:27.744631    4907 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:27.744668    4907 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:27.744679    4907 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:27.745147    4907 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:27.910142    4907 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:27.947482    4907 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:27.947489    4907 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:27.947671    4907 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:27.957284    4907 main.go:141] libmachine: STDOUT: 
	I0926 18:08:27.957303    4907 main.go:141] libmachine: STDERR: 
	I0926 18:08:27.957363    4907 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2 +20000M
	I0926 18:08:27.965599    4907 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:27.965613    4907 main.go:141] libmachine: STDERR: 
	I0926 18:08:27.965626    4907 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:27.965630    4907 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:27.965638    4907 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:27.965669    4907 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:54:b3:ef:5f:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kindnet-790000/disk.qcow2
	I0926 18:08:27.967316    4907 main.go:141] libmachine: STDOUT: 
	I0926 18:08:27.967337    4907 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:27.967350    4907 client.go:171] duration metric: took 222.974833ms to LocalClient.Create
	I0926 18:08:29.969455    4907 start.go:128] duration metric: took 2.270265834s to createHost
	I0926 18:08:29.969536    4907 start.go:83] releasing machines lock for "kindnet-790000", held for 2.270766625s
	W0926 18:08:29.969981    4907 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:29.978700    4907 out.go:201] 
	W0926 18:08:29.988877    4907 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:08:29.988913    4907 out.go:270] * 
	* 
	W0926 18:08:29.991326    4907 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:08:30.004655    4907 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.72s)
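
Every failure in this group has the same proximate cause: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet on the agent, so the client exits with "Connection refused" before the VM ever boots. The condition is easy to confirm from outside minikube by probing the same unix socket. The sketch below is illustrative only (the program and its output strings are not part of minikube); it assumes the default socket path shown in the log:

package main

// Minimal probe of the unix socket that socket_vmnet_client connects to.
// Illustrative sketch only; the socket path is the default one from the log.

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent this prints a "connection refused" error,
		// matching the STDERR captured in the failures above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If the probe fails, the socket_vmnet service is down on the agent; restarting it (for Homebrew installs, minikube's qemu driver documentation suggests "sudo ${HOMEBREW} services start socket_vmnet") should clear this entire group of failures.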

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.004577708s)

                                                
                                                
-- stdout --
	* [calico-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-790000" primary control-plane node in "calico-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:08:32.269502    5020 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:08:32.269631    5020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:32.269634    5020 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:32.269636    5020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:32.269766    5020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:08:32.270898    5020 out.go:352] Setting JSON to false
	I0926 18:08:32.287380    5020 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4075,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:08:32.287454    5020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:08:32.295458    5020 out.go:177] * [calico-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:08:32.304254    5020 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:08:32.304315    5020 notify.go:220] Checking for updates...
	I0926 18:08:32.308796    5020 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:08:32.312215    5020 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:08:32.315256    5020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:08:32.323130    5020 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:08:32.326301    5020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:08:32.329576    5020 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:08:32.329647    5020 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:08:32.329697    5020 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:08:32.334116    5020 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:08:32.341231    5020 start.go:297] selected driver: qemu2
	I0926 18:08:32.341237    5020 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:08:32.341243    5020 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:08:32.343719    5020 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:08:32.346266    5020 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:08:32.349268    5020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:08:32.349286    5020 cni.go:84] Creating CNI manager for "calico"
	I0926 18:08:32.349290    5020 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0926 18:08:32.349324    5020 start.go:340] cluster config:
	{Name:calico-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:08:32.353123    5020 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:08:32.361190    5020 out.go:177] * Starting "calico-790000" primary control-plane node in "calico-790000" cluster
	I0926 18:08:32.365211    5020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:08:32.365225    5020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:08:32.365233    5020 cache.go:56] Caching tarball of preloaded images
	I0926 18:08:32.365288    5020 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:08:32.365294    5020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:08:32.365349    5020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/calico-790000/config.json ...
	I0926 18:08:32.365361    5020 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/calico-790000/config.json: {Name:mkf874a89ce0bed83b2781c30bddfc9b70306d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:08:32.365815    5020 start.go:360] acquireMachinesLock for calico-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:32.365850    5020 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "calico-790000"
	I0926 18:08:32.365862    5020 start.go:93] Provisioning new machine with config: &{Name:calico-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:32.365888    5020 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:32.374240    5020 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:32.390307    5020 start.go:159] libmachine.API.Create for "calico-790000" (driver="qemu2")
	I0926 18:08:32.390341    5020 client.go:168] LocalClient.Create starting
	I0926 18:08:32.390409    5020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:32.390441    5020 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:32.390450    5020 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:32.390490    5020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:32.390513    5020 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:32.390531    5020 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:32.391032    5020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:32.549451    5020 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:32.598543    5020 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:32.598548    5020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:32.598726    5020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:32.608419    5020 main.go:141] libmachine: STDOUT: 
	I0926 18:08:32.608440    5020 main.go:141] libmachine: STDERR: 
	I0926 18:08:32.608518    5020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2 +20000M
	I0926 18:08:32.616639    5020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:32.616656    5020 main.go:141] libmachine: STDERR: 
	I0926 18:08:32.616678    5020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:32.616685    5020 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:32.616694    5020 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:32.616723    5020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:fa:dd:7a:e9:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:32.618383    5020 main.go:141] libmachine: STDOUT: 
	I0926 18:08:32.618397    5020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:32.618417    5020 client.go:171] duration metric: took 228.0805ms to LocalClient.Create
	I0926 18:08:34.620520    5020 start.go:128] duration metric: took 2.254678709s to createHost
	I0926 18:08:34.620656    5020 start.go:83] releasing machines lock for "calico-790000", held for 2.254900917s
	W0926 18:08:34.620721    5020 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:34.626489    5020 out.go:177] * Deleting "calico-790000" in qemu2 ...
	W0926 18:08:34.656844    5020 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:34.656870    5020 start.go:729] Will try again in 5 seconds ...
	I0926 18:08:39.658786    5020 start.go:360] acquireMachinesLock for calico-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:39.659352    5020 start.go:364] duration metric: took 483.625µs to acquireMachinesLock for "calico-790000"
	I0926 18:08:39.659498    5020 start.go:93] Provisioning new machine with config: &{Name:calico-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:39.659818    5020 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:39.667294    5020 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:39.719124    5020 start.go:159] libmachine.API.Create for "calico-790000" (driver="qemu2")
	I0926 18:08:39.719179    5020 client.go:168] LocalClient.Create starting
	I0926 18:08:39.719310    5020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:39.719386    5020 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:39.719408    5020 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:39.719488    5020 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:39.719534    5020 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:39.719550    5020 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:39.720119    5020 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:39.896107    5020 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:40.182511    5020 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:40.182524    5020 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:40.182792    5020 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:40.192944    5020 main.go:141] libmachine: STDOUT: 
	I0926 18:08:40.192967    5020 main.go:141] libmachine: STDERR: 
	I0926 18:08:40.193034    5020 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2 +20000M
	I0926 18:08:40.201277    5020 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:40.201294    5020 main.go:141] libmachine: STDERR: 
	I0926 18:08:40.201307    5020 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:40.201312    5020 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:40.201328    5020 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:40.201357    5020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:56:79:e8:b1:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/calico-790000/disk.qcow2
	I0926 18:08:40.203075    5020 main.go:141] libmachine: STDOUT: 
	I0926 18:08:40.203095    5020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:40.203107    5020 client.go:171] duration metric: took 483.949291ms to LocalClient.Create
	I0926 18:08:42.205210    5020 start.go:128] duration metric: took 2.545488208s to createHost
	I0926 18:08:42.205398    5020 start.go:83] releasing machines lock for "calico-790000", held for 2.54613475s
	W0926 18:08:42.205736    5020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:42.215559    5020 out.go:201] 
	W0926 18:08:42.219552    5020 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:08:42.219572    5020 out.go:270] * 
	* 
	W0926 18:08:42.221962    5020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:08:42.231530    5020 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.01s)
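
The roughly 10-second duration reported for each test in this group follows directly from the start logic visible in the log: a first createHost attempt spends about 2 seconds before giving up on the socket, the half-created guest is deleted, start.go sleeps 5 seconds ("Will try again in 5 seconds ..."), and a second attempt of about 2.5 seconds fails identically before the test exits with status 80. A rough sketch of that shape, with invented names and hard-coded delays standing in for minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for minikube's real start path; here it always fails
// the way the log does, after roughly the delay the log attributes to it.
func createHost() error {
	time.Sleep(2 * time.Second)
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	start := time.Now()
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
	fmt.Printf("elapsed: %s\n", time.Since(start)) // ~9s, in line with the reported durations
}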

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.893639042s)

                                                
                                                
-- stdout --
	* [custom-flannel-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-790000" primary control-plane node in "custom-flannel-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:08:44.667929    5138 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:08:44.668092    5138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:44.668096    5138 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:44.668098    5138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:44.668253    5138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:08:44.669341    5138 out.go:352] Setting JSON to false
	I0926 18:08:44.685590    5138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4087,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:08:44.685664    5138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:08:44.691792    5138 out.go:177] * [custom-flannel-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:08:44.699692    5138 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:08:44.699732    5138 notify.go:220] Checking for updates...
	I0926 18:08:44.707691    5138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:08:44.710659    5138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:08:44.713653    5138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:08:44.717446    5138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:08:44.721467    5138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:08:44.725210    5138 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:08:44.725277    5138 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:08:44.725325    5138 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:08:44.729666    5138 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:08:44.736659    5138 start.go:297] selected driver: qemu2
	I0926 18:08:44.736666    5138 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:08:44.736677    5138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:08:44.739117    5138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:08:44.741658    5138 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:08:44.744728    5138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:08:44.744760    5138 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0926 18:08:44.744769    5138 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0926 18:08:44.744804    5138 start.go:340] cluster config:
	{Name:custom-flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:08:44.748522    5138 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:08:44.755693    5138 out.go:177] * Starting "custom-flannel-790000" primary control-plane node in "custom-flannel-790000" cluster
	I0926 18:08:44.759673    5138 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:08:44.759688    5138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:08:44.759696    5138 cache.go:56] Caching tarball of preloaded images
	I0926 18:08:44.759753    5138 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:08:44.759759    5138 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:08:44.759816    5138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/custom-flannel-790000/config.json ...
	I0926 18:08:44.759827    5138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/custom-flannel-790000/config.json: {Name:mk24e9f7ccab10d7eef7e31d827e14396c6bdb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:08:44.760261    5138 start.go:360] acquireMachinesLock for custom-flannel-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:44.760298    5138 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "custom-flannel-790000"
	I0926 18:08:44.760310    5138 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:44.760343    5138 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:44.768649    5138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:44.785570    5138 start.go:159] libmachine.API.Create for "custom-flannel-790000" (driver="qemu2")
	I0926 18:08:44.785601    5138 client.go:168] LocalClient.Create starting
	I0926 18:08:44.785678    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:44.785707    5138 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:44.785715    5138 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:44.785755    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:44.785778    5138 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:44.785786    5138 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:44.786183    5138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:44.948621    5138 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:45.105815    5138 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:45.105823    5138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:45.106037    5138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:45.115821    5138 main.go:141] libmachine: STDOUT: 
	I0926 18:08:45.115850    5138 main.go:141] libmachine: STDERR: 
	I0926 18:08:45.115913    5138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2 +20000M
	I0926 18:08:45.123825    5138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:45.123839    5138 main.go:141] libmachine: STDERR: 
	I0926 18:08:45.123859    5138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:45.123863    5138 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:45.123875    5138 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:45.123902    5138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:e4:10:4e:7a:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:45.125524    5138 main.go:141] libmachine: STDOUT: 
	I0926 18:08:45.125537    5138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:45.125555    5138 client.go:171] duration metric: took 339.966333ms to LocalClient.Create
	I0926 18:08:47.127671    5138 start.go:128] duration metric: took 2.367421833s to createHost
	I0926 18:08:47.127752    5138 start.go:83] releasing machines lock for "custom-flannel-790000", held for 2.367568625s
	W0926 18:08:47.127880    5138 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:47.142222    5138 out.go:177] * Deleting "custom-flannel-790000" in qemu2 ...
	W0926 18:08:47.176566    5138 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:47.176594    5138 start.go:729] Will try again in 5 seconds ...
	I0926 18:08:52.178522    5138 start.go:360] acquireMachinesLock for custom-flannel-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:52.179017    5138 start.go:364] duration metric: took 411.834µs to acquireMachinesLock for "custom-flannel-790000"
	I0926 18:08:52.179150    5138 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:52.179567    5138 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:52.198245    5138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:52.248244    5138 start.go:159] libmachine.API.Create for "custom-flannel-790000" (driver="qemu2")
	I0926 18:08:52.248306    5138 client.go:168] LocalClient.Create starting
	I0926 18:08:52.248410    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:52.248482    5138 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:52.248495    5138 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:52.248568    5138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:52.248606    5138 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:52.248616    5138 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:52.249138    5138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:52.415547    5138 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:52.474344    5138 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:52.474353    5138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:52.474559    5138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:52.484007    5138 main.go:141] libmachine: STDOUT: 
	I0926 18:08:52.484024    5138 main.go:141] libmachine: STDERR: 
	I0926 18:08:52.484099    5138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2 +20000M
	I0926 18:08:52.492089    5138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:52.492118    5138 main.go:141] libmachine: STDERR: 
	I0926 18:08:52.492135    5138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:52.492140    5138 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:52.492150    5138 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:52.492181    5138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:87:f1:12:3c:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/custom-flannel-790000/disk.qcow2
	I0926 18:08:52.493971    5138 main.go:141] libmachine: STDOUT: 
	I0926 18:08:52.493991    5138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:52.494005    5138 client.go:171] duration metric: took 245.707459ms to LocalClient.Create
	I0926 18:08:54.496083    5138 start.go:128] duration metric: took 2.316608084s to createHost
	I0926 18:08:54.496145    5138 start.go:83] releasing machines lock for "custom-flannel-790000", held for 2.31723425s
	W0926 18:08:54.496494    5138 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:54.504000    5138 out.go:201] 
	W0926 18:08:54.507242    5138 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:08:54.507285    5138 out.go:270] * 
	* 
	W0926 18:08:54.509166    5138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:08:54.520195    5138 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.90s)
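Both create attempts above die at the same step: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which first has to reach the unix socket at /var/run/socket_vmnet, and that connect is refused, i.e. no socket_vmnet daemon is listening on the CI host. The failing step can be reproduced in isolation with a minimal Go sketch (the program and its output text are illustrative, not minikube code; only the socket path is taken from the log):

	// socketcheck.go: dial the unix socket that socket_vmnet_client needs.
	// "connection refused" here matches the failure in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this check fails, restoring the socket_vmnet service (it normally runs as root, e.g. via the launchd job its Makefile or Homebrew installs) should unblock the whole group, since every test below aborts at the same connect.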

TestNetworkPlugins/group/false/Start (9.89s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.889159542s)

-- stdout --
	* [false-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-790000" primary control-plane node in "false-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:08:56.957200    5258 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:08:56.957348    5258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:56.957351    5258 out.go:358] Setting ErrFile to fd 2...
	I0926 18:08:56.957354    5258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:08:56.957503    5258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:08:56.958515    5258 out.go:352] Setting JSON to false
	I0926 18:08:56.975416    5258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4099,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:08:56.975504    5258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:08:56.983788    5258 out.go:177] * [false-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:08:56.991596    5258 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:08:56.991625    5258 notify.go:220] Checking for updates...
	I0926 18:08:56.998497    5258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:08:57.001542    5258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:08:57.004592    5258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:08:57.007549    5258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:08:57.010530    5258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:08:57.013961    5258 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:08:57.014023    5258 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:08:57.014077    5258 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:08:57.018531    5258 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:08:57.025639    5258 start.go:297] selected driver: qemu2
	I0926 18:08:57.025648    5258 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:08:57.025656    5258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:08:57.027947    5258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:08:57.031585    5258 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:08:57.034612    5258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:08:57.034628    5258 cni.go:84] Creating CNI manager for "false"
	I0926 18:08:57.034655    5258 start.go:340] cluster config:
	{Name:false-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:08:57.038294    5258 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:08:57.046552    5258 out.go:177] * Starting "false-790000" primary control-plane node in "false-790000" cluster
	I0926 18:08:57.050600    5258 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:08:57.050614    5258 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:08:57.050621    5258 cache.go:56] Caching tarball of preloaded images
	I0926 18:08:57.050676    5258 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:08:57.050681    5258 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:08:57.050731    5258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/false-790000/config.json ...
	I0926 18:08:57.050741    5258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/false-790000/config.json: {Name:mkc64cd6e12b84a5b53eff9fb5e84d5d4848ba50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:08:57.051049    5258 start.go:360] acquireMachinesLock for false-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:08:57.051079    5258 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "false-790000"
	I0926 18:08:57.051089    5258 start.go:93] Provisioning new machine with config: &{Name:false-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:08:57.051123    5258 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:08:57.055610    5258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:08:57.070879    5258 start.go:159] libmachine.API.Create for "false-790000" (driver="qemu2")
	I0926 18:08:57.070910    5258 client.go:168] LocalClient.Create starting
	I0926 18:08:57.070970    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:08:57.071001    5258 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:57.071010    5258 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:57.071046    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:08:57.071070    5258 main.go:141] libmachine: Decoding PEM data...
	I0926 18:08:57.071081    5258 main.go:141] libmachine: Parsing certificate...
	I0926 18:08:57.071405    5258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:08:57.230666    5258 main.go:141] libmachine: Creating SSH key...
	I0926 18:08:57.269162    5258 main.go:141] libmachine: Creating Disk image...
	I0926 18:08:57.269168    5258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:08:57.269353    5258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:08:57.278862    5258 main.go:141] libmachine: STDOUT: 
	I0926 18:08:57.278885    5258 main.go:141] libmachine: STDERR: 
	I0926 18:08:57.278955    5258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2 +20000M
	I0926 18:08:57.287063    5258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:08:57.287081    5258 main.go:141] libmachine: STDERR: 
	I0926 18:08:57.287095    5258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:08:57.287100    5258 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:08:57.287112    5258 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:08:57.287145    5258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:4b:90:74:dd:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:08:57.288863    5258 main.go:141] libmachine: STDOUT: 
	I0926 18:08:57.288877    5258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:08:57.288898    5258 client.go:171] duration metric: took 217.992ms to LocalClient.Create
	I0926 18:08:59.291060    5258 start.go:128] duration metric: took 2.24001775s to createHost
	I0926 18:08:59.291143    5258 start.go:83] releasing machines lock for "false-790000", held for 2.240173708s
	W0926 18:08:59.291212    5258 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:59.301598    5258 out.go:177] * Deleting "false-790000" in qemu2 ...
	W0926 18:08:59.345224    5258 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:08:59.345252    5258 start.go:729] Will try again in 5 seconds ...
	I0926 18:09:04.347255    5258 start.go:360] acquireMachinesLock for false-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:04.347936    5258 start.go:364] duration metric: took 521.916µs to acquireMachinesLock for "false-790000"
	I0926 18:09:04.348074    5258 start.go:93] Provisioning new machine with config: &{Name:false-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:04.348306    5258 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:04.356678    5258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:04.407921    5258 start.go:159] libmachine.API.Create for "false-790000" (driver="qemu2")
	I0926 18:09:04.407971    5258 client.go:168] LocalClient.Create starting
	I0926 18:09:04.408116    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:04.408175    5258 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:04.408194    5258 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:04.408263    5258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:04.408307    5258 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:04.408319    5258 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:04.408896    5258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:04.579035    5258 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:04.740903    5258 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:04.740912    5258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:04.741155    5258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:09:04.751048    5258 main.go:141] libmachine: STDOUT: 
	I0926 18:09:04.751070    5258 main.go:141] libmachine: STDERR: 
	I0926 18:09:04.751136    5258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2 +20000M
	I0926 18:09:04.759046    5258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:04.759071    5258 main.go:141] libmachine: STDERR: 
	I0926 18:09:04.759083    5258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:09:04.759087    5258 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:04.759095    5258 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:04.759133    5258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:54:29:2d:93:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/false-790000/disk.qcow2
	I0926 18:09:04.760799    5258 main.go:141] libmachine: STDOUT: 
	I0926 18:09:04.760817    5258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:04.760829    5258 client.go:171] duration metric: took 352.870917ms to LocalClient.Create
	I0926 18:09:06.762930    5258 start.go:128] duration metric: took 2.414715209s to createHost
	I0926 18:09:06.763026    5258 start.go:83] releasing machines lock for "false-790000", held for 2.415191125s
	W0926 18:09:06.763307    5258 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:06.776811    5258 out.go:201] 
	W0926 18:09:06.781936    5258 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:09:06.781976    5258 out.go:270] * 
	* 
	W0926 18:09:06.784669    5258 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:06.802679    5258 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
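The log also shows which host-side steps do succeed on every attempt: the qemu-img convert and resize calls return with empty STDERR and "Image resized." before the socket_vmnet_client launch fails. A sketch of that disk-preparation step, replaying the two logged commands via os/exec (function name and paths are illustrative; the real implementation lives in minikube's qemu2 driver):

	// diskprep.go: replay the two qemu-img invocations from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk converts the raw seed image to qcow2, then grows it by
	// extraMB megabytes (the log uses +20000M).
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("disk image ready")
	}

That these steps pass suggests the qemu tooling on the host is healthy and the failure is isolated to the vmnet networking layer.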

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.826342708s)

-- stdout --
	* [enable-default-cni-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-790000" primary control-plane node in "enable-default-cni-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:09:09.024520    5373 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:09:09.024634    5373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:09.024637    5373 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:09.024640    5373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:09.024754    5373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:09:09.025891    5373 out.go:352] Setting JSON to false
	I0926 18:09:09.041667    5373 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4112,"bootTime":1727395237,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:09:09.041726    5373 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:09:09.048438    5373 out.go:177] * [enable-default-cni-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:09:09.057380    5373 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:09:09.057431    5373 notify.go:220] Checking for updates...
	I0926 18:09:09.068367    5373 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:09:09.071307    5373 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:09:09.074284    5373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:09:09.081984    5373 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:09:09.086399    5373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:09:09.088094    5373 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:09:09.088155    5373 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:09:09.088202    5373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:09:09.091263    5373 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:09:09.095568    5373 start.go:297] selected driver: qemu2
	I0926 18:09:09.095573    5373 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:09:09.095577    5373 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:09:09.097669    5373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:09:09.102268    5373 out.go:177] * Automatically selected the socket_vmnet network
	E0926 18:09:09.105315    5373 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0926 18:09:09.105326    5373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:09:09.105340    5373 cni.go:84] Creating CNI manager for "bridge"
	I0926 18:09:09.105350    5373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:09:09.105376    5373 start.go:340] cluster config:
	{Name:enable-default-cni-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:09:09.108863    5373 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:09:09.117350    5373 out.go:177] * Starting "enable-default-cni-790000" primary control-plane node in "enable-default-cni-790000" cluster
	I0926 18:09:09.121300    5373 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:09:09.121314    5373 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:09:09.121328    5373 cache.go:56] Caching tarball of preloaded images
	I0926 18:09:09.121388    5373 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:09:09.121393    5373 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:09:09.121464    5373 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/enable-default-cni-790000/config.json ...
	I0926 18:09:09.121474    5373 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/enable-default-cni-790000/config.json: {Name:mk6b66dff39626fbca92a3388a28d6d891816cc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:09:09.121904    5373 start.go:360] acquireMachinesLock for enable-default-cni-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:09.121934    5373 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "enable-default-cni-790000"
	I0926 18:09:09.121944    5373 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:09.121969    5373 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:09.126343    5373 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:09.141955    5373 start.go:159] libmachine.API.Create for "enable-default-cni-790000" (driver="qemu2")
	I0926 18:09:09.141981    5373 client.go:168] LocalClient.Create starting
	I0926 18:09:09.142039    5373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:09.142070    5373 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:09.142079    5373 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:09.142118    5373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:09.142141    5373 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:09.142146    5373 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:09.142620    5373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:09.301531    5373 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:09.407890    5373 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:09.407896    5373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:09.408071    5373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:09.417330    5373 main.go:141] libmachine: STDOUT: 
	I0926 18:09:09.417343    5373 main.go:141] libmachine: STDERR: 
	I0926 18:09:09.417408    5373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2 +20000M
	I0926 18:09:09.425383    5373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:09.425404    5373 main.go:141] libmachine: STDERR: 
	I0926 18:09:09.425421    5373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:09.425426    5373 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:09.425438    5373 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:09.425463    5373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c5:e7:38:fe:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:09.427169    5373 main.go:141] libmachine: STDOUT: 
	I0926 18:09:09.427182    5373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:09.427203    5373 client.go:171] duration metric: took 285.2315ms to LocalClient.Create
	I0926 18:09:11.429212    5373 start.go:128] duration metric: took 2.307361167s to createHost
	I0926 18:09:11.429225    5373 start.go:83] releasing machines lock for "enable-default-cni-790000", held for 2.307410042s
	W0926 18:09:11.429245    5373 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:11.433806    5373 out.go:177] * Deleting "enable-default-cni-790000" in qemu2 ...
	W0926 18:09:11.467885    5373 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:11.467893    5373 start.go:729] Will try again in 5 seconds ...
	I0926 18:09:16.469830    5373 start.go:360] acquireMachinesLock for enable-default-cni-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:16.470143    5373 start.go:364] duration metric: took 222.542µs to acquireMachinesLock for "enable-default-cni-790000"
	I0926 18:09:16.470213    5373 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:16.470379    5373 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:16.476786    5373 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:16.511466    5373 start.go:159] libmachine.API.Create for "enable-default-cni-790000" (driver="qemu2")
	I0926 18:09:16.511521    5373 client.go:168] LocalClient.Create starting
	I0926 18:09:16.511648    5373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:16.511702    5373 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:16.511714    5373 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:16.511772    5373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:16.511809    5373 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:16.511824    5373 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:16.512259    5373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:16.674809    5373 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:16.755085    5373 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:16.755100    5373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:16.755292    5373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:16.764864    5373 main.go:141] libmachine: STDOUT: 
	I0926 18:09:16.764883    5373 main.go:141] libmachine: STDERR: 
	I0926 18:09:16.764967    5373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2 +20000M
	I0926 18:09:16.773262    5373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:16.773279    5373 main.go:141] libmachine: STDERR: 
	I0926 18:09:16.773291    5373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:16.773304    5373 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:16.773321    5373 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:16.773348    5373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f8:5a:cf:6a:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/enable-default-cni-790000/disk.qcow2
	I0926 18:09:16.775070    5373 main.go:141] libmachine: STDOUT: 
	I0926 18:09:16.775087    5373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:16.775105    5373 client.go:171] duration metric: took 263.59275ms to LocalClient.Create
	I0926 18:09:18.777296    5373 start.go:128] duration metric: took 2.306995375s to createHost
	I0926 18:09:18.777392    5373 start.go:83] releasing machines lock for "enable-default-cni-790000", held for 2.307350625s
	W0926 18:09:18.777721    5373 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:18.786397    5373 out.go:201] 
	W0926 18:09:18.797440    5373 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:09:18.797490    5373 out.go:270] * 
	* 
	W0926 18:09:18.800216    5373 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:18.810343    5373 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
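Note: every attempt in this group dies at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host creation fails before the VM ever boots. A minimal triage sketch for the build agent follows; it assumes socket_vmnet was installed at the Homebrew paths shown in the log, and uses only standard macOS tooling, nothing minikube-specific.

	# Is the unix socket present, and is the daemon behind it alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	sudo launchctl list | grep -i vmnet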

TestNetworkPlugins/group/flannel/Start (10.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.140534625s)

-- stdout --
	* [flannel-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-790000" primary control-plane node in "flannel-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:09:21.029130    5482 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:09:21.029266    5482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:21.029272    5482 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:21.029275    5482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:21.029404    5482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:09:21.030492    5482 out.go:352] Setting JSON to false
	I0926 18:09:21.046578    5482 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4124,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:09:21.046649    5482 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:09:21.052757    5482 out.go:177] * [flannel-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:09:21.060482    5482 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:09:21.060512    5482 notify.go:220] Checking for updates...
	I0926 18:09:21.067617    5482 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:09:21.070548    5482 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:09:21.073626    5482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:09:21.076627    5482 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:09:21.078242    5482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:09:21.081839    5482 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:09:21.081900    5482 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:09:21.081944    5482 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:09:21.086591    5482 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:09:21.091559    5482 start.go:297] selected driver: qemu2
	I0926 18:09:21.091566    5482 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:09:21.091572    5482 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:09:21.093744    5482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:09:21.096580    5482 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:09:21.099621    5482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:09:21.099640    5482 cni.go:84] Creating CNI manager for "flannel"
	I0926 18:09:21.099650    5482 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0926 18:09:21.099678    5482 start.go:340] cluster config:
	{Name:flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:09:21.103557    5482 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:09:21.114528    5482 out.go:177] * Starting "flannel-790000" primary control-plane node in "flannel-790000" cluster
	I0926 18:09:21.118590    5482 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:09:21.118604    5482 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:09:21.118613    5482 cache.go:56] Caching tarball of preloaded images
	I0926 18:09:21.118671    5482 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:09:21.118677    5482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:09:21.118730    5482 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/flannel-790000/config.json ...
	I0926 18:09:21.118744    5482 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/flannel-790000/config.json: {Name:mkff0ae10f12c27c5f6b46a1e08c45eb84ecaaed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:09:21.119087    5482 start.go:360] acquireMachinesLock for flannel-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:21.119118    5482 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "flannel-790000"
	I0926 18:09:21.119128    5482 start.go:93] Provisioning new machine with config: &{Name:flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:21.119153    5482 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:21.123625    5482 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:21.138780    5482 start.go:159] libmachine.API.Create for "flannel-790000" (driver="qemu2")
	I0926 18:09:21.138808    5482 client.go:168] LocalClient.Create starting
	I0926 18:09:21.138880    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:21.138919    5482 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:21.138929    5482 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:21.138961    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:21.138985    5482 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:21.138994    5482 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:21.139451    5482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:21.299115    5482 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:21.594670    5482 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:21.594681    5482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:21.594959    5482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:21.605158    5482 main.go:141] libmachine: STDOUT: 
	I0926 18:09:21.605181    5482 main.go:141] libmachine: STDERR: 
	I0926 18:09:21.605256    5482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2 +20000M
	I0926 18:09:21.613485    5482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:21.613501    5482 main.go:141] libmachine: STDERR: 
	I0926 18:09:21.613524    5482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:21.613528    5482 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:21.613542    5482 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:21.613575    5482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:65:22:7a:54:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:21.615300    5482 main.go:141] libmachine: STDOUT: 
	I0926 18:09:21.615313    5482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:21.615331    5482 client.go:171] duration metric: took 476.541958ms to LocalClient.Create
	I0926 18:09:23.617426    5482 start.go:128] duration metric: took 2.498377667s to createHost
	I0926 18:09:23.617524    5482 start.go:83] releasing machines lock for "flannel-790000", held for 2.498531792s
	W0926 18:09:23.617654    5482 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:23.624758    5482 out.go:177] * Deleting "flannel-790000" in qemu2 ...
	W0926 18:09:23.656234    5482 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:23.656261    5482 start.go:729] Will try again in 5 seconds ...
	I0926 18:09:28.657143    5482 start.go:360] acquireMachinesLock for flannel-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:28.657790    5482 start.go:364] duration metric: took 521.083µs to acquireMachinesLock for "flannel-790000"
	I0926 18:09:28.657878    5482 start.go:93] Provisioning new machine with config: &{Name:flannel-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:28.658114    5482 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:28.663712    5482 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:28.709812    5482 start.go:159] libmachine.API.Create for "flannel-790000" (driver="qemu2")
	I0926 18:09:28.709872    5482 client.go:168] LocalClient.Create starting
	I0926 18:09:28.709985    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:28.710057    5482 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:28.710075    5482 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:28.710146    5482 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:28.710192    5482 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:28.710204    5482 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:28.710978    5482 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:28.878649    5482 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:29.074793    5482 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:29.074805    5482 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:29.075012    5482 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:29.085031    5482 main.go:141] libmachine: STDOUT: 
	I0926 18:09:29.085048    5482 main.go:141] libmachine: STDERR: 
	I0926 18:09:29.085112    5482 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2 +20000M
	I0926 18:09:29.093332    5482 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:29.093346    5482 main.go:141] libmachine: STDERR: 
	I0926 18:09:29.093359    5482 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:29.093364    5482 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:29.093373    5482 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:29.093413    5482 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:20:1a:ac:c2:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/flannel-790000/disk.qcow2
	I0926 18:09:29.095142    5482 main.go:141] libmachine: STDOUT: 
	I0926 18:09:29.095156    5482 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:29.095170    5482 client.go:171] duration metric: took 385.313917ms to LocalClient.Create
	I0926 18:09:31.097372    5482 start.go:128] duration metric: took 2.439311333s to createHost
	I0926 18:09:31.097475    5482 start.go:83] releasing machines lock for "flannel-790000", held for 2.439772084s
	W0926 18:09:31.097926    5482 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:31.106924    5482 out.go:201] 
	W0926 18:09:31.118054    5482 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:09:31.118077    5482 out.go:270] * 
	* 
	W0926 18:09:31.119373    5482 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:31.131911    5482 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.14s)
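Note: the flannel failure is the same socket_vmnet connection refusal as above, just under a different profile name. The failing step can be reproduced in isolation with the same client binary the test invokes; this is a sketch under the assumption that socket_vmnet_client will wrap any child command, with `echo ok` standing in for the qemu command line shown in the log.

	# Reproduce the refused connection without booting a VM
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# On this agent the expected output is:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused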

TestNetworkPlugins/group/bridge/Start (9.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.812248958s)

-- stdout --
	* [bridge-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-790000" primary control-plane node in "bridge-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:09:33.534639    5599 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:09:33.534773    5599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:33.534776    5599 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:33.534779    5599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:33.534919    5599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:09:33.536021    5599 out.go:352] Setting JSON to false
	I0926 18:09:33.552679    5599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4136,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:09:33.552780    5599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:09:33.559036    5599 out.go:177] * [bridge-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:09:33.567769    5599 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:09:33.567804    5599 notify.go:220] Checking for updates...
	I0926 18:09:33.575849    5599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:09:33.578879    5599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:09:33.582834    5599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:09:33.585938    5599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:09:33.588877    5599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:09:33.592245    5599 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:09:33.592306    5599 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:09:33.592363    5599 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:09:33.596875    5599 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:09:33.603828    5599 start.go:297] selected driver: qemu2
	I0926 18:09:33.603834    5599 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:09:33.603840    5599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:09:33.606001    5599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:09:33.609859    5599 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:09:33.612915    5599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:09:33.612934    5599 cni.go:84] Creating CNI manager for "bridge"
	I0926 18:09:33.612938    5599 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:09:33.612968    5599 start.go:340] cluster config:
	{Name:bridge-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:09:33.616367    5599 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:09:33.624697    5599 out.go:177] * Starting "bridge-790000" primary control-plane node in "bridge-790000" cluster
	I0926 18:09:33.628819    5599 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:09:33.628831    5599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:09:33.628839    5599 cache.go:56] Caching tarball of preloaded images
	I0926 18:09:33.628889    5599 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:09:33.628894    5599 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:09:33.628942    5599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/bridge-790000/config.json ...
	I0926 18:09:33.628952    5599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/bridge-790000/config.json: {Name:mkf4fc374b86e53f704cfb2ee5e8fe1be6a30288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:09:33.629357    5599 start.go:360] acquireMachinesLock for bridge-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:33.629389    5599 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "bridge-790000"
	I0926 18:09:33.629403    5599 start.go:93] Provisioning new machine with config: &{Name:bridge-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:33.629431    5599 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:33.634866    5599 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:33.650023    5599 start.go:159] libmachine.API.Create for "bridge-790000" (driver="qemu2")
	I0926 18:09:33.650049    5599 client.go:168] LocalClient.Create starting
	I0926 18:09:33.650122    5599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:33.650152    5599 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:33.650161    5599 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:33.650201    5599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:33.650223    5599 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:33.650236    5599 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:33.650686    5599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:33.812307    5599 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:33.857583    5599 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:33.857589    5599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:33.857779    5599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:33.867008    5599 main.go:141] libmachine: STDOUT: 
	I0926 18:09:33.867024    5599 main.go:141] libmachine: STDERR: 
	I0926 18:09:33.867078    5599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2 +20000M
	I0926 18:09:33.874873    5599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:33.874889    5599 main.go:141] libmachine: STDERR: 
	I0926 18:09:33.874902    5599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:33.874906    5599 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:33.874920    5599 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:33.874945    5599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:35:07:3a:1d:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:33.876645    5599 main.go:141] libmachine: STDOUT: 
	I0926 18:09:33.876659    5599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:33.876682    5599 client.go:171] duration metric: took 226.6385ms to LocalClient.Create
	I0926 18:09:35.878802    5599 start.go:128] duration metric: took 2.249461333s to createHost
	I0926 18:09:35.878906    5599 start.go:83] releasing machines lock for "bridge-790000", held for 2.249626667s
	W0926 18:09:35.879012    5599 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:35.898381    5599 out.go:177] * Deleting "bridge-790000" in qemu2 ...
	W0926 18:09:35.927740    5599 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:35.927763    5599 start.go:729] Will try again in 5 seconds ...
	I0926 18:09:40.929776    5599 start.go:360] acquireMachinesLock for bridge-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:40.930340    5599 start.go:364] duration metric: took 470.584µs to acquireMachinesLock for "bridge-790000"
	I0926 18:09:40.930477    5599 start.go:93] Provisioning new machine with config: &{Name:bridge-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:40.930724    5599 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:40.939253    5599 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:40.989404    5599 start.go:159] libmachine.API.Create for "bridge-790000" (driver="qemu2")
	I0926 18:09:40.989450    5599 client.go:168] LocalClient.Create starting
	I0926 18:09:40.989598    5599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:40.989671    5599 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:40.989698    5599 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:40.989819    5599 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:40.989865    5599 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:40.989879    5599 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:40.990404    5599 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:41.158749    5599 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:41.265372    5599 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:41.265379    5599 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:41.265560    5599 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:41.275138    5599 main.go:141] libmachine: STDOUT: 
	I0926 18:09:41.275161    5599 main.go:141] libmachine: STDERR: 
	I0926 18:09:41.275224    5599 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2 +20000M
	I0926 18:09:41.283309    5599 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:41.283325    5599 main.go:141] libmachine: STDERR: 
	I0926 18:09:41.283338    5599 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:41.283342    5599 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:41.283350    5599 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:41.283378    5599 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:bf:8a:40:09:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/bridge-790000/disk.qcow2
	I0926 18:09:41.285102    5599 main.go:141] libmachine: STDOUT: 
	I0926 18:09:41.285118    5599 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:41.285129    5599 client.go:171] duration metric: took 295.688375ms to LocalClient.Create
	I0926 18:09:43.287119    5599 start.go:128] duration metric: took 2.356501125s to createHost
	I0926 18:09:43.287178    5599 start.go:83] releasing machines lock for "bridge-790000", held for 2.356942375s
	W0926 18:09:43.287386    5599 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:43.294856    5599 out.go:201] 
	W0926 18:09:43.296800    5599 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:09:43.296836    5599 out.go:270] * 
	* 
	W0926 18:09:43.298093    5599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:43.311842    5599 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
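
Every failure in this group reduces to the same root cause, visible in the stderr above: nothing was accepting connections on the unix socket /var/run/socket_vmnet, so socket_vmnet_client could not hand QEMU a vmnet file descriptor. A minimal diagnostic sketch for the test host follows; it assumes socket_vmnet was installed via Homebrew and runs as a root-managed service, which this report does not itself confirm:

    # Does the socket exist, and is a daemon holding it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If no daemon is running, restart it (Homebrew-managed setup assumed):
    sudo brew services restart socket_vmnet

"Connection refused" on a unix socket means the socket file exists but no process is accepting on it, so the daemon side, not the QEMU command line, is what needs attention.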

TestNetworkPlugins/group/kubenet/Start (9.87s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-790000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.868484875s)

-- stdout --
	* [kubenet-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-790000" primary control-plane node in "kubenet-790000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:09:45.483051    5708 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:09:45.483170    5708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:45.483173    5708 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:45.483176    5708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:45.483301    5708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:09:45.484403    5708 out.go:352] Setting JSON to false
	I0926 18:09:45.500623    5708 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4148,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:09:45.500734    5708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:09:45.508241    5708 out.go:177] * [kubenet-790000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:09:45.516150    5708 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:09:45.516190    5708 notify.go:220] Checking for updates...
	I0926 18:09:45.522122    5708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:09:45.525117    5708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:09:45.529136    5708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:09:45.532161    5708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:09:45.535080    5708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:09:45.538558    5708 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:09:45.538625    5708 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:09:45.538677    5708 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:09:45.543142    5708 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:09:45.550107    5708 start.go:297] selected driver: qemu2
	I0926 18:09:45.550112    5708 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:09:45.550117    5708 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:09:45.552216    5708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:09:45.556133    5708 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:09:45.559122    5708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:09:45.559140    5708 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0926 18:09:45.559169    5708 start.go:340] cluster config:
	{Name:kubenet-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:09:45.562531    5708 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:09:45.569071    5708 out.go:177] * Starting "kubenet-790000" primary control-plane node in "kubenet-790000" cluster
	I0926 18:09:45.573052    5708 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:09:45.573074    5708 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:09:45.573085    5708 cache.go:56] Caching tarball of preloaded images
	I0926 18:09:45.573153    5708 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:09:45.573159    5708 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:09:45.573228    5708 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kubenet-790000/config.json ...
	I0926 18:09:45.573239    5708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/kubenet-790000/config.json: {Name:mk8d5009d5cedc0c6a06d6de579e456ecc91e4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:09:45.573558    5708 start.go:360] acquireMachinesLock for kubenet-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:45.573591    5708 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "kubenet-790000"
	I0926 18:09:45.573602    5708 start.go:93] Provisioning new machine with config: &{Name:kubenet-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:45.573642    5708 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:45.582080    5708 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:45.597963    5708 start.go:159] libmachine.API.Create for "kubenet-790000" (driver="qemu2")
	I0926 18:09:45.598000    5708 client.go:168] LocalClient.Create starting
	I0926 18:09:45.598078    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:45.598112    5708 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:45.598125    5708 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:45.598161    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:45.598183    5708 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:45.598192    5708 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:45.598523    5708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:45.761360    5708 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:45.856448    5708 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:45.856457    5708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:45.856641    5708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:45.866061    5708 main.go:141] libmachine: STDOUT: 
	I0926 18:09:45.866076    5708 main.go:141] libmachine: STDERR: 
	I0926 18:09:45.866138    5708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2 +20000M
	I0926 18:09:45.874101    5708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:45.874116    5708 main.go:141] libmachine: STDERR: 
	I0926 18:09:45.874129    5708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:45.874134    5708 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:45.874158    5708 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:45.874184    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:e9:d7:dd:9a:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:45.875863    5708 main.go:141] libmachine: STDOUT: 
	I0926 18:09:45.875876    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:45.875903    5708 client.go:171] duration metric: took 277.9125ms to LocalClient.Create
	I0926 18:09:47.877998    5708 start.go:128] duration metric: took 2.304449959s to createHost
	I0926 18:09:47.878102    5708 start.go:83] releasing machines lock for "kubenet-790000", held for 2.304624083s
	W0926 18:09:47.878219    5708 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:47.892566    5708 out.go:177] * Deleting "kubenet-790000" in qemu2 ...
	W0926 18:09:47.923238    5708 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:47.923269    5708 start.go:729] Will try again in 5 seconds ...
	I0926 18:09:52.924138    5708 start.go:360] acquireMachinesLock for kubenet-790000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:52.924450    5708 start.go:364] duration metric: took 238.792µs to acquireMachinesLock for "kubenet-790000"
	I0926 18:09:52.924480    5708 start.go:93] Provisioning new machine with config: &{Name:kubenet-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-790000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:52.924559    5708 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:52.945096    5708 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0926 18:09:52.979626    5708 start.go:159] libmachine.API.Create for "kubenet-790000" (driver="qemu2")
	I0926 18:09:52.979669    5708 client.go:168] LocalClient.Create starting
	I0926 18:09:52.979793    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:52.979860    5708 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:52.979878    5708 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:52.979936    5708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:52.979974    5708 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:52.979991    5708 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:52.980593    5708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:53.151491    5708 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:53.254686    5708 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:53.254697    5708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:53.254893    5708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:53.264238    5708 main.go:141] libmachine: STDOUT: 
	I0926 18:09:53.264254    5708 main.go:141] libmachine: STDERR: 
	I0926 18:09:53.264328    5708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2 +20000M
	I0926 18:09:53.272598    5708 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:53.272614    5708 main.go:141] libmachine: STDERR: 
	I0926 18:09:53.272625    5708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:53.272630    5708 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:53.272638    5708 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:53.272682    5708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:81:d5:b6:d1:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/kubenet-790000/disk.qcow2
	I0926 18:09:53.274381    5708 main.go:141] libmachine: STDOUT: 
	I0926 18:09:53.274397    5708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:53.274408    5708 client.go:171] duration metric: took 294.750209ms to LocalClient.Create
	I0926 18:09:55.276795    5708 start.go:128] duration metric: took 2.352322125s to createHost
	I0926 18:09:55.276899    5708 start.go:83] releasing machines lock for "kubenet-790000", held for 2.35255825s
	W0926 18:09:55.277280    5708 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:09:55.292047    5708 out.go:201] 
	W0926 18:09:55.297095    5708 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:09:55.297114    5708 out.go:270] * 
	* 
	W0926 18:09:55.299274    5708 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:09:55.311009    5708 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.87s)
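
The provisioning sequence is identical in every attempt: qemu-img prepares the disk, then QEMU is launched through socket_vmnet_client, which connects to the daemon socket and passes the connected descriptor to its child as fd 3 (hence -netdev socket,id=net0,fd=3 in the captured command line; this description of socket_vmnet's mechanism is background, not something the log itself states). The disk-prep half can be reproduced in isolation with the exact commands captured above, paths shortened for readability:

    # raw boot image -> qcow2, then grow the virtual size by 20000 MB
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M

Both steps succeed in every run logged here (empty STDERR, "Image resized."), which confines the failure to the socket_vmnet_client hop.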

TestStartStop/group/old-k8s-version/serial/FirstStart (10.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.087154959s)

-- stdout --
	* [old-k8s-version-187000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-187000" primary control-plane node in "old-k8s-version-187000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-187000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:09:57.531850    5822 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:09:57.532031    5822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:57.532035    5822 out.go:358] Setting ErrFile to fd 2...
	I0926 18:09:57.532037    5822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:09:57.532180    5822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:09:57.533500    5822 out.go:352] Setting JSON to false
	I0926 18:09:57.551951    5822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4160,"bootTime":1727395237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:09:57.552073    5822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:09:57.558180    5822 out.go:177] * [old-k8s-version-187000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:09:57.565099    5822 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:09:57.565219    5822 notify.go:220] Checking for updates...
	I0926 18:09:57.571098    5822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:09:57.574114    5822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:09:57.577161    5822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:09:57.580170    5822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:09:57.587107    5822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:09:57.594435    5822 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:09:57.594523    5822 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:09:57.594570    5822 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:09:57.601948    5822 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:09:57.612977    5822 start.go:297] selected driver: qemu2
	I0926 18:09:57.612985    5822 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:09:57.612992    5822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:09:57.615635    5822 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:09:57.620120    5822 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:09:57.627142    5822 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:09:57.627161    5822 cni.go:84] Creating CNI manager for ""
	I0926 18:09:57.627182    5822 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 18:09:57.627201    5822 start.go:340] cluster config:
	{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:09:57.630672    5822 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:09:57.639115    5822 out.go:177] * Starting "old-k8s-version-187000" primary control-plane node in "old-k8s-version-187000" cluster
	I0926 18:09:57.647101    5822 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 18:09:57.647122    5822 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 18:09:57.647131    5822 cache.go:56] Caching tarball of preloaded images
	I0926 18:09:57.647221    5822 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:09:57.647227    5822 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 18:09:57.647289    5822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0926 18:09:57.647300    5822 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/old-k8s-version-187000/config.json: {Name:mkf0c5552f956ef8c37947831c42731a15d065d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:09:57.647599    5822 start.go:360] acquireMachinesLock for old-k8s-version-187000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:09:57.647634    5822 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "old-k8s-version-187000"
	I0926 18:09:57.647646    5822 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:09:57.647680    5822 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:09:57.652086    5822 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:09:57.667779    5822 start.go:159] libmachine.API.Create for "old-k8s-version-187000" (driver="qemu2")
	I0926 18:09:57.667819    5822 client.go:168] LocalClient.Create starting
	I0926 18:09:57.667896    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:09:57.667926    5822 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:57.667937    5822 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:57.667975    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:09:57.667998    5822 main.go:141] libmachine: Decoding PEM data...
	I0926 18:09:57.668004    5822 main.go:141] libmachine: Parsing certificate...
	I0926 18:09:57.670307    5822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:09:58.009136    5822 main.go:141] libmachine: Creating SSH key...
	I0926 18:09:58.135055    5822 main.go:141] libmachine: Creating Disk image...
	I0926 18:09:58.135064    5822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:09:58.135252    5822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:09:58.154833    5822 main.go:141] libmachine: STDOUT: 
	I0926 18:09:58.154860    5822 main.go:141] libmachine: STDERR: 
	I0926 18:09:58.154937    5822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2 +20000M
	I0926 18:09:58.164200    5822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:09:58.164231    5822 main.go:141] libmachine: STDERR: 
	I0926 18:09:58.164254    5822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:09:58.164261    5822 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:09:58.164274    5822 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:09:58.164314    5822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:63:ab:fb:14:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:09:58.166408    5822 main.go:141] libmachine: STDOUT: 
	I0926 18:09:58.166442    5822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:09:58.166472    5822 client.go:171] duration metric: took 498.66925ms to LocalClient.Create
	I0926 18:10:00.168591    5822 start.go:128] duration metric: took 2.521013167s to createHost
	I0926 18:10:00.168688    5822 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 2.521178291s
	W0926 18:10:00.168763    5822 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:00.181167    5822 out.go:177] * Deleting "old-k8s-version-187000" in qemu2 ...
	W0926 18:10:00.218011    5822 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:00.218045    5822 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:05.219975    5822 start.go:360] acquireMachinesLock for old-k8s-version-187000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:05.220614    5822 start.go:364] duration metric: took 540.458µs to acquireMachinesLock for "old-k8s-version-187000"
	I0926 18:10:05.220763    5822 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:05.221084    5822 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:05.232644    5822 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:05.281470    5822 start.go:159] libmachine.API.Create for "old-k8s-version-187000" (driver="qemu2")
	I0926 18:10:05.281547    5822 client.go:168] LocalClient.Create starting
	I0926 18:10:05.281667    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:05.281742    5822 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:05.281764    5822 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:05.281829    5822 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:05.281873    5822 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:05.281884    5822 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:05.282539    5822 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:05.449272    5822 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:05.522296    5822 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:05.522302    5822 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:05.522506    5822 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:10:05.531772    5822 main.go:141] libmachine: STDOUT: 
	I0926 18:10:05.531794    5822 main.go:141] libmachine: STDERR: 
	I0926 18:10:05.531852    5822 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2 +20000M
	I0926 18:10:05.539790    5822 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:05.539805    5822 main.go:141] libmachine: STDERR: 
	I0926 18:10:05.539816    5822 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:10:05.539821    5822 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:05.539837    5822 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:05.539869    5822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:1c:57:a0:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:10:05.541591    5822 main.go:141] libmachine: STDOUT: 
	I0926 18:10:05.541605    5822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:05.541619    5822 client.go:171] duration metric: took 260.080042ms to LocalClient.Create
	I0926 18:10:07.543652    5822 start.go:128] duration metric: took 2.322665292s to createHost
	I0926 18:10:07.543688    5822 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 2.323177667s
	W0926 18:10:07.543827    5822 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-187000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:07.555759    5822 out.go:201] 
	W0926 18:10:07.565698    5822 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:07.565706    5822 out.go:270] * 
	* 
	W0926 18:10:07.566179    5822 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:07.574704    5822 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (31.94475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.12s)
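
The post-mortem helper tolerates exit status 7 from "minikube status" ("may be ok") because status encodes cluster state in its exit code instead of failing outright; a profile whose host, kubelet, and apiserver are all down combines the per-component flags into 7. That bitmask reading is inferred from minikube's status command, not stated in the log. The check itself is just a Go template over the status struct:

    # Print only the Host field of the profile's status
    out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-187000
    # prints "Stopped" and exits 7: the profile exists, but its VM never came up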

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml: exit status 1 (26.842917ms)

** stderr ** 
	error: context "old-k8s-version-187000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-187000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (29.360583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (30.291167ms)

-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-187000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system: exit status 1 (27.721417ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-187000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-187000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (30.212458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
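The assertion at start_stop_delete_test.go:221 fails with an empty "Addon deployment info:" because the kubectl describe call had no context to talk to; judging by the failure message, the check itself appears to be a substring match of the registry-prefixed image against the describe output. A toy version of that match, with deployInfo as a hypothetical stand-in for the kubectl output:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Stand-in for "kubectl describe deploy/metrics-server -n kube-system";
        // empty here because the cluster context was never created.
        deployInfo := ""
        want := " fake.domain/registry.k8s.io/echoserver:1.4"
        if !strings.Contains(deployInfo, want) {
            fmt.Printf("addon did not load correct image. Expected to contain %q. Addon deployment info: %s\n", want, deployInfo)
        }
    }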

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1799265s)

                                                
                                                
-- stdout --
	* [old-k8s-version-187000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-187000" primary control-plane node in "old-k8s-version-187000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-187000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:10:09.868404    5865 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:09.868519    5865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:09.868523    5865 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:09.868525    5865 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:09.868637    5865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:09.869653    5865 out.go:352] Setting JSON to false
	I0926 18:10:09.885667    5865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4172,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:09.885739    5865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:09.890703    5865 out.go:177] * [old-k8s-version-187000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:09.897561    5865 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:09.897601    5865 notify.go:220] Checking for updates...
	I0926 18:10:09.905572    5865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:09.908601    5865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:09.911576    5865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:09.914543    5865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:09.917622    5865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:09.920865    5865 config.go:182] Loaded profile config "old-k8s-version-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0926 18:10:09.924490    5865 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0926 18:10:09.927634    5865 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:09.932601    5865 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:10:09.939583    5865 start.go:297] selected driver: qemu2
	I0926 18:10:09.939590    5865 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:09.939652    5865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:09.941942    5865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:09.941968    5865 cni.go:84] Creating CNI manager for ""
	I0926 18:10:09.941991    5865 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 18:10:09.942021    5865 start.go:340] cluster config:
	{Name:old-k8s-version-187000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-187000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:09.945533    5865 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:09.947060    5865 out.go:177] * Starting "old-k8s-version-187000" primary control-plane node in "old-k8s-version-187000" cluster
	I0926 18:10:09.954607    5865 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 18:10:09.954619    5865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 18:10:09.954629    5865 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:09.954689    5865 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:09.954695    5865 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 18:10:09.954745    5865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/old-k8s-version-187000/config.json ...
	I0926 18:10:09.955242    5865 start.go:360] acquireMachinesLock for old-k8s-version-187000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:09.955277    5865 start.go:364] duration metric: took 28.042µs to acquireMachinesLock for "old-k8s-version-187000"
	I0926 18:10:09.955289    5865 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:09.955295    5865 fix.go:54] fixHost starting: 
	I0926 18:10:09.955411    5865 fix.go:112] recreateIfNeeded on old-k8s-version-187000: state=Stopped err=<nil>
	W0926 18:10:09.955420    5865 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:09.958558    5865 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-187000" ...
	I0926 18:10:09.966547    5865 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:09.966583    5865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:1c:57:a0:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:10:09.968496    5865 main.go:141] libmachine: STDOUT: 
	I0926 18:10:09.968514    5865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:09.968543    5865 fix.go:56] duration metric: took 13.248958ms for fixHost
	I0926 18:10:09.968546    5865 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 13.265958ms
	W0926 18:10:09.968552    5865 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:09.968594    5865 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:09.968598    5865 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:14.970447    5865 start.go:360] acquireMachinesLock for old-k8s-version-187000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:14.970660    5865 start.go:364] duration metric: took 167.833µs to acquireMachinesLock for "old-k8s-version-187000"
	I0926 18:10:14.970701    5865 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:14.970709    5865 fix.go:54] fixHost starting: 
	I0926 18:10:14.971015    5865 fix.go:112] recreateIfNeeded on old-k8s-version-187000: state=Stopped err=<nil>
	W0926 18:10:14.971028    5865 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:14.980284    5865 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-187000" ...
	I0926 18:10:14.983287    5865 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:14.983421    5865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:b6:1c:57:a0:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/old-k8s-version-187000/disk.qcow2
	I0926 18:10:14.988055    5865 main.go:141] libmachine: STDOUT: 
	I0926 18:10:14.988093    5865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:14.988136    5865 fix.go:56] duration metric: took 17.4275ms for fixHost
	I0926 18:10:14.988144    5865 start.go:83] releasing machines lock for "old-k8s-version-187000", held for 17.471584ms
	W0926 18:10:14.988230    5865 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-187000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-187000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:14.995266    5865 out.go:201] 
	W0926 18:10:14.999159    5865 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:14.999172    5865 out.go:270] * 
	* 
	W0926 18:10:15.000178    5865 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:15.010249    5865 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-187000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (50.141667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
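The stderr above captures minikube's recovery path: fix.go restarts the stopped VM, the socket connect fails, start.go logs "Will try again in 5 seconds ...", and the second attempt fails identically before the run exits with GUEST_PROVISION. A compact sketch of that try/sleep/retry shape, with startHost as a hypothetical stand-in for the real driver call:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu2 driver start; in this run it always
    // fails with the connection-refused error seen in the logs.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..." at start.go:729
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }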

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-187000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (31.668709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
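This subtest never reaches the cluster: the failure is raised while building the kubernetes client config, because the failed first start never wrote an "old-k8s-version-187000" context into the kubeconfig. The same error can be reproduced with client-go's kubeconfig loader (a sketch assuming the k8s.io/client-go module):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        loader := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-187000"}
        cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loader, overrides)
        if _, err := cfg.ClientConfig(); err != nil {
            // Prints: client config: context "old-k8s-version-187000" does not exist
            fmt.Println("client config:", err)
        }
    }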

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-187000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.060208ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-187000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-187000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (29.717375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-187000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (28.775542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
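The want/got block above is go-cmp output in (-want +got) form: every image expected for v1.20.0 is reported missing because "image list" ran against a stopped host and returned nothing. A minimal reproduction of that diff shape, assuming the github.com/google/go-cmp module and a shortened want list:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/coredns:1.7.0",
            "k8s.gcr.io/etcd:3.4.13-0",
            "k8s.gcr.io/pause:3.2",
        }
        got := []string{} // nothing listed: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }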

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-187000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-187000 --alsologtostderr -v=1: exit status 83 (43.179084ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-187000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-187000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:10:15.258232    5888 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:15.259117    5888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:15.259121    5888 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:15.259123    5888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:15.259255    5888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:15.259458    5888 out.go:352] Setting JSON to false
	I0926 18:10:15.259466    5888 mustload.go:65] Loading cluster: old-k8s-version-187000
	I0926 18:10:15.259679    5888 config.go:182] Loaded profile config "old-k8s-version-187000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0926 18:10:15.264526    5888 out.go:177] * The control-plane node old-k8s-version-187000 host is not running: state=Stopped
	I0926 18:10:15.267532    5888 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-187000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-187000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (29.128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (29.380625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-187000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.867447458s)

                                                
                                                
-- stdout --
	* [embed-certs-917000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-917000" primary control-plane node in "embed-certs-917000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-917000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0926 18:10:15.585060    5905 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:15.585221    5905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:15.585224    5905 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:15.585227    5905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:15.585355    5905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:15.586418    5905 out.go:352] Setting JSON to false
	I0926 18:10:15.603289    5905 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4178,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:15.603362    5905 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:15.608005    5905 out.go:177] * [embed-certs-917000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:15.613911    5905 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:15.613984    5905 notify.go:220] Checking for updates...
	I0926 18:10:15.619907    5905 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:15.622896    5905 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:15.625904    5905 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:15.628907    5905 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:15.630460    5905 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:15.634214    5905 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:15.634273    5905 config.go:182] Loaded profile config "stopped-upgrade-211000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0926 18:10:15.634313    5905 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:15.638911    5905 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:10:15.644862    5905 start.go:297] selected driver: qemu2
	I0926 18:10:15.644868    5905 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:10:15.644873    5905 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:15.647176    5905 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:10:15.649893    5905 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:10:15.653071    5905 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:15.653089    5905 cni.go:84] Creating CNI manager for ""
	I0926 18:10:15.653111    5905 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:15.653115    5905 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:10:15.653151    5905 start.go:340] cluster config:
	{Name:embed-certs-917000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:15.656780    5905 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:15.663939    5905 out.go:177] * Starting "embed-certs-917000" primary control-plane node in "embed-certs-917000" cluster
	I0926 18:10:15.667870    5905 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:15.667884    5905 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:10:15.667893    5905 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:15.667942    5905 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:15.667947    5905 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:10:15.668005    5905 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/embed-certs-917000/config.json ...
	I0926 18:10:15.668015    5905 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/embed-certs-917000/config.json: {Name:mk3d1617f6bbc05e3e5c6969d26289b396306731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:10:15.668221    5905 start.go:360] acquireMachinesLock for embed-certs-917000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:15.668254    5905 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "embed-certs-917000"
	I0926 18:10:15.668266    5905 start.go:93] Provisioning new machine with config: &{Name:embed-certs-917000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:15.668297    5905 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:15.675845    5905 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:15.691577    5905 start.go:159] libmachine.API.Create for "embed-certs-917000" (driver="qemu2")
	I0926 18:10:15.691605    5905 client.go:168] LocalClient.Create starting
	I0926 18:10:15.691687    5905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:15.691719    5905 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:15.691740    5905 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:15.691783    5905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:15.691809    5905 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:15.691818    5905 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:15.692181    5905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:15.852030    5905 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:15.968143    5905 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:15.968151    5905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:15.968342    5905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:15.977768    5905 main.go:141] libmachine: STDOUT: 
	I0926 18:10:15.977796    5905 main.go:141] libmachine: STDERR: 
	I0926 18:10:15.977857    5905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2 +20000M
	I0926 18:10:15.985720    5905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:15.985734    5905 main.go:141] libmachine: STDERR: 
	I0926 18:10:15.985751    5905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:15.985757    5905 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:15.985768    5905 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:15.985797    5905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:53:f6:d1:83:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:15.987395    5905 main.go:141] libmachine: STDOUT: 
	I0926 18:10:15.987410    5905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:15.987430    5905 client.go:171] duration metric: took 295.835ms to LocalClient.Create
	I0926 18:10:17.989545    5905 start.go:128] duration metric: took 2.3213415s to createHost
	I0926 18:10:17.989610    5905 start.go:83] releasing machines lock for "embed-certs-917000", held for 2.32147125s
	W0926 18:10:17.989733    5905 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:18.004649    5905 out.go:177] * Deleting "embed-certs-917000" in qemu2 ...
	W0926 18:10:18.030613    5905 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:18.030628    5905 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:23.032480    5905 start.go:360] acquireMachinesLock for embed-certs-917000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:23.032736    5905 start.go:364] duration metric: took 215.5µs to acquireMachinesLock for "embed-certs-917000"
	I0926 18:10:23.032828    5905 start.go:93] Provisioning new machine with config: &{Name:embed-certs-917000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:23.032937    5905 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:23.046312    5905 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:23.080380    5905 start.go:159] libmachine.API.Create for "embed-certs-917000" (driver="qemu2")
	I0926 18:10:23.080423    5905 client.go:168] LocalClient.Create starting
	I0926 18:10:23.080526    5905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:23.080585    5905 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:23.080600    5905 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:23.080657    5905 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:23.080696    5905 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:23.080709    5905 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:23.081407    5905 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:23.246956    5905 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:23.354519    5905 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:23.354526    5905 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:23.354710    5905 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:23.364066    5905 main.go:141] libmachine: STDOUT: 
	I0926 18:10:23.364081    5905 main.go:141] libmachine: STDERR: 
	I0926 18:10:23.364145    5905 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2 +20000M
	I0926 18:10:23.372063    5905 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:23.372085    5905 main.go:141] libmachine: STDERR: 
	I0926 18:10:23.372099    5905 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:23.372105    5905 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:23.372111    5905 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:23.372141    5905 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:0c:45:82:68:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:23.373790    5905 main.go:141] libmachine: STDOUT: 
	I0926 18:10:23.373805    5905 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:23.373821    5905 client.go:171] duration metric: took 293.407ms to LocalClient.Create
	I0926 18:10:25.375936    5905 start.go:128] duration metric: took 2.343089708s to createHost
	I0926 18:10:25.376016    5905 start.go:83] releasing machines lock for "embed-certs-917000", held for 2.343386417s
	W0926 18:10:25.376427    5905 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-917000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:25.385972    5905 out.go:201] 
	W0926 18:10:25.397072    5905 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:25.397102    5905 out.go:270] * 
	* 
	W0926 18:10:25.399606    5905 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:25.409963    5905 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
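Every failed start in this group reduces to the same root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal triage sketch for the build host, assuming the /opt/socket_vmnet install path shown in the command line above (the gateway address is an illustrative default, not taken from this report):

    ls -l /var/run/socket_vmnet    # should exist and show type 's' (unix socket)
    pgrep -fl socket_vmnet         # the socket_vmnet daemon should be running
    # if the daemon is down, restarting it as root may clear the refusal:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet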
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (68.294375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
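The "exit status 7 (may be ok)" note reflects how minikube status encodes state: the low three bits of the exit code appear to flag the host, the kubelet, and the apiserver respectively, so 7 (1+2+4) means all three are down, which is the expected state after a failed start. A quick check, assuming the same profile:

    out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000
    echo $?    # 7 here: host, kubelet, and apiserver all report stopped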
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-917000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-917000 create -f testdata/busybox.yaml: exit status 1 (31.236417ms)

** stderr ** 
	error: context "embed-certs-917000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-917000 create -f testdata/busybox.yaml failed: exit status 1
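kubectl fails here because the first start never got far enough to write an embed-certs-917000 context into the kubeconfig, so every subsequent kubectl --context call in this group fails the same way. A sketch of how one might confirm that, assuming the KUBECONFIG from the start output above:

    KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig \
      kubectl config get-contexts    # embed-certs-917000 would be absent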
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (30.396625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (30.188833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-917000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-917000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-917000 describe deploy/metrics-server -n kube-system: exit status 1 (27.539583ms)

** stderr ** 
	error: context "embed-certs-917000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-917000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
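The expectation string is just the two flags from the enable command composed together: --registries=MetricsServer=fake.domain prefixed onto --images=MetricsServer=registry.k8s.io/echoserver:1.4 yields fake.domain/registry.k8s.io/echoserver:1.4. With a running cluster one could verify the deployment picked it up like this (hypothetical here, since the host is stopped):

    kubectl --context embed-certs-917000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'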
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (30.132875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.395000083s)

-- stdout --
	* [embed-certs-917000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-917000" primary control-plane node in "embed-certs-917000" cluster
	* Restarting existing qemu2 VM for "embed-certs-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-917000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0926 18:10:27.764874    5947 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:27.764993    5947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:27.764996    5947 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:27.764998    5947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:27.765137    5947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:27.766171    5947 out.go:352] Setting JSON to false
	I0926 18:10:27.782916    5947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4190,"bootTime":1727395237,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:27.782988    5947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:27.787382    5947 out.go:177] * [embed-certs-917000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:27.793327    5947 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:27.793406    5947 notify.go:220] Checking for updates...
	I0926 18:10:27.801373    5947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:27.808348    5947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:27.811412    5947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:27.814341    5947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:27.817333    5947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:27.820621    5947 config.go:182] Loaded profile config "embed-certs-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:27.820870    5947 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:27.825368    5947 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:10:27.832377    5947 start.go:297] selected driver: qemu2
	I0926 18:10:27.832384    5947 start.go:901] validating driver "qemu2" against &{Name:embed-certs-917000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:embed-certs-917000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:27.832440    5947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:27.834653    5947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:27.834683    5947 cni.go:84] Creating CNI manager for ""
	I0926 18:10:27.834714    5947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:27.834736    5947 start.go:340] cluster config:
	{Name:embed-certs-917000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-917000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:27.837992    5947 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:27.845350    5947 out.go:177] * Starting "embed-certs-917000" primary control-plane node in "embed-certs-917000" cluster
	I0926 18:10:27.849404    5947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:27.849421    5947 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:10:27.849431    5947 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:27.849492    5947 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:27.849498    5947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:10:27.849565    5947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/embed-certs-917000/config.json ...
	I0926 18:10:27.850040    5947 start.go:360] acquireMachinesLock for embed-certs-917000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:27.850066    5947 start.go:364] duration metric: took 20µs to acquireMachinesLock for "embed-certs-917000"
	I0926 18:10:27.850074    5947 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:27.850080    5947 fix.go:54] fixHost starting: 
	I0926 18:10:27.850185    5947 fix.go:112] recreateIfNeeded on embed-certs-917000: state=Stopped err=<nil>
	W0926 18:10:27.850193    5947 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:27.858367    5947 out.go:177] * Restarting existing qemu2 VM for "embed-certs-917000" ...
	I0926 18:10:27.862333    5947 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:27.862364    5947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:0c:45:82:68:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:27.864135    5947 main.go:141] libmachine: STDOUT: 
	I0926 18:10:27.864150    5947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:27.864177    5947 fix.go:56] duration metric: took 14.097792ms for fixHost
	I0926 18:10:27.864181    5947 start.go:83] releasing machines lock for "embed-certs-917000", held for 14.111625ms
	W0926 18:10:27.864186    5947 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:27.864227    5947 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:27.864231    5947 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:32.865479    5947 start.go:360] acquireMachinesLock for embed-certs-917000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:33.049982    5947 start.go:364] duration metric: took 184.428208ms to acquireMachinesLock for "embed-certs-917000"
	I0926 18:10:33.050046    5947 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:33.050062    5947 fix.go:54] fixHost starting: 
	I0926 18:10:33.050776    5947 fix.go:112] recreateIfNeeded on embed-certs-917000: state=Stopped err=<nil>
	W0926 18:10:33.050803    5947 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:33.056354    5947 out.go:177] * Restarting existing qemu2 VM for "embed-certs-917000" ...
	I0926 18:10:33.077285    5947 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:33.077444    5947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:0c:45:82:68:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/embed-certs-917000/disk.qcow2
	I0926 18:10:33.089576    5947 main.go:141] libmachine: STDOUT: 
	I0926 18:10:33.089660    5947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:33.089768    5947 fix.go:56] duration metric: took 39.692ms for fixHost
	I0926 18:10:33.089792    5947 start.go:83] releasing machines lock for "embed-certs-917000", held for 39.780375ms
	W0926 18:10:33.090020    5947 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-917000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-917000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:33.097240    5947 out.go:201] 
	W0926 18:10:33.101431    5947 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:33.101459    5947 out.go:270] * 
	* 
	W0926 18:10:33.103791    5947 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:33.124241    5947 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
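Unlike FirstStart, this run takes the fixHost path ("Restarting existing qemu2 VM") because a stopped profile already exists, but it dies on the same socket_vmnet refusal. The recovery the output itself suggests, written out as commands (these would still fail until the socket_vmnet daemon is reachable):

    out/minikube-darwin-arm64 delete -p embed-certs-917000
    out/minikube-darwin-arm64 start -p embed-certs-917000 --memory=2200 \
      --embed-certs --driver=qemu2 --kubernetes-version=v1.31.1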
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (57.275209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.45s)

TestStartStop/group/no-preload/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.979810334s)

-- stdout --
	* [no-preload-421000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-421000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
-- /stdout --
** stderr ** 
	I0926 18:10:30.571571    5964 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:30.571705    5964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:30.571709    5964 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:30.571711    5964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:30.571849    5964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:30.572949    5964 out.go:352] Setting JSON to false
	I0926 18:10:30.589686    5964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4193,"bootTime":1727395237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:30.589756    5964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:30.594300    5964 out.go:177] * [no-preload-421000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:30.601248    5964 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:30.601348    5964 notify.go:220] Checking for updates...
	I0926 18:10:30.608228    5964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:30.611190    5964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:30.612686    5964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:30.616161    5964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:30.619138    5964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:30.622574    5964 config.go:182] Loaded profile config "embed-certs-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:30.622632    5964 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:30.622682    5964 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:30.627150    5964 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:10:30.634260    5964 start.go:297] selected driver: qemu2
	I0926 18:10:30.634268    5964 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:10:30.634276    5964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:30.636665    5964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:10:30.640145    5964 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:10:30.643259    5964 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:30.643282    5964 cni.go:84] Creating CNI manager for ""
	I0926 18:10:30.643311    5964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:30.643318    5964 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:10:30.643355    5964 start.go:340] cluster config:
	{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:30.647034    5964 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.654163    5964 out.go:177] * Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	I0926 18:10:30.658201    5964 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:30.658275    5964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/no-preload-421000/config.json ...
	I0926 18:10:30.658290    5964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/no-preload-421000/config.json: {Name:mk4ce3d0a368af5a2a577012726585fa4daffdad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:10:30.658290    5964 cache.go:107] acquiring lock: {Name:mk9fe0dc2128d7589ccdf16b00551b774f1e3ad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658293    5964 cache.go:107] acquiring lock: {Name:mka2794e14c3d83963291f7ccf8a15aef76e08bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658307    5964 cache.go:107] acquiring lock: {Name:mkbb520ce013d82b322bcf16acf008c83bc86f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658372    5964 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0926 18:10:30.658378    5964 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.5µs
	I0926 18:10:30.658387    5964 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0926 18:10:30.658394    5964 cache.go:107] acquiring lock: {Name:mk63edc18738ae22f0822a069a886319205bbb36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658462    5964 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:10:30.658468    5964 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:10:30.658530    5964 cache.go:107] acquiring lock: {Name:mk34516a2cdcac63bb9f33dd4f6d722e48075ab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658554    5964 cache.go:107] acquiring lock: {Name:mk8b39772f709d469d2f3a2067788c1438bbdefc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658629    5964 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0926 18:10:30.658631    5964 cache.go:107] acquiring lock: {Name:mka191bab5daac44613d53489a541ed562ed2e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658680    5964 cache.go:107] acquiring lock: {Name:mk39e1ef9abbd9afe643b5af5519125f91230536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:30.658687    5964 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:10:30.658651    5964 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:30.658726    5964 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0926 18:10:30.658796    5964 start.go:364] duration metric: took 70.375µs to acquireMachinesLock for "no-preload-421000"
	I0926 18:10:30.658798    5964 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:10:30.658814    5964 start.go:93] Provisioning new machine with config: &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:30.658872    5964 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:30.658969    5964 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:10:30.666067    5964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:30.670961    5964 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0926 18:10:30.670976    5964 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0926 18:10:30.670993    5964 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0926 18:10:30.671369    5964 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0926 18:10:30.673335    5964 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0926 18:10:30.673433    5964 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0926 18:10:30.673445    5964 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0926 18:10:30.683963    5964 start.go:159] libmachine.API.Create for "no-preload-421000" (driver="qemu2")
	I0926 18:10:30.683985    5964 client.go:168] LocalClient.Create starting
	I0926 18:10:30.684046    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:30.684079    5964 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:30.684092    5964 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:30.684131    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:30.684154    5964 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:30.684163    5964 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:30.684459    5964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:30.977069    5964 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:31.029781    5964 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:31.029788    5964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:31.029948    5964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:31.039435    5964 main.go:141] libmachine: STDOUT: 
	I0926 18:10:31.039451    5964 main.go:141] libmachine: STDERR: 
	I0926 18:10:31.039496    5964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2 +20000M
	I0926 18:10:31.047734    5964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:31.047746    5964 main.go:141] libmachine: STDERR: 
	I0926 18:10:31.047758    5964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:31.047762    5964 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:31.047774    5964 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:31.047800    5964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:98:a4:f7:7e:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:31.049549    5964 main.go:141] libmachine: STDOUT: 
	I0926 18:10:31.049562    5964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:31.049580    5964 client.go:171] duration metric: took 365.609208ms to LocalClient.Create
	I0926 18:10:31.067196    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0926 18:10:31.090859    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0926 18:10:31.101085    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0926 18:10:31.110383    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0926 18:10:31.149776    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0926 18:10:31.184210    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0926 18:10:31.206853    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0926 18:10:31.206874    5964 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 548.428459ms
	I0926 18:10:31.206887    5964 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0926 18:10:31.239038    5964 cache.go:162] opening:  /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0926 18:10:33.049782    5964 start.go:128] duration metric: took 2.391011708s to createHost
	I0926 18:10:33.049835    5964 start.go:83] releasing machines lock for "no-preload-421000", held for 2.391155833s
	W0926 18:10:33.049889    5964 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:33.074313    5964 out.go:177] * Deleting "no-preload-421000" in qemu2 ...
	W0926 18:10:33.135409    5964 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:33.135430    5964 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:33.837771    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0926 18:10:33.837782    5964 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.179372042s
	I0926 18:10:33.837792    5964 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0926 18:10:35.445398    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0926 18:10:35.445468    5964 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.787180625s
	I0926 18:10:35.445500    5964 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0926 18:10:35.680415    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0926 18:10:35.680464    5964 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 5.022428542s
	I0926 18:10:35.680488    5964 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0926 18:10:35.714390    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0926 18:10:35.714427    5964 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.056013375s
	I0926 18:10:35.714449    5964 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0926 18:10:35.830189    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0926 18:10:35.830247    5964 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.172233542s
	I0926 18:10:35.830274    5964 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0926 18:10:38.135392    5964 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:38.135837    5964 start.go:364] duration metric: took 366µs to acquireMachinesLock for "no-preload-421000"
	I0926 18:10:38.135983    5964 start.go:93] Provisioning new machine with config: &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:38.136213    5964 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:38.145765    5964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:38.197276    5964 start.go:159] libmachine.API.Create for "no-preload-421000" (driver="qemu2")
	I0926 18:10:38.197335    5964 client.go:168] LocalClient.Create starting
	I0926 18:10:38.197459    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:38.197523    5964 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:38.197544    5964 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:38.197610    5964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:38.197653    5964 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:38.197669    5964 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:38.198179    5964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:38.428910    5964 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:38.455665    5964 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:38.455671    5964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:38.455849    5964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:38.465098    5964 main.go:141] libmachine: STDOUT: 
	I0926 18:10:38.465157    5964 main.go:141] libmachine: STDERR: 
	I0926 18:10:38.465213    5964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2 +20000M
	I0926 18:10:38.473162    5964 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:38.473181    5964 main.go:141] libmachine: STDERR: 
	I0926 18:10:38.473198    5964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:38.473202    5964 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:38.473213    5964 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:38.473256    5964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:fe:7d:6f:2d:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:38.474860    5964 main.go:141] libmachine: STDOUT: 
	I0926 18:10:38.474874    5964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:38.474886    5964 client.go:171] duration metric: took 277.560584ms to LocalClient.Create
	I0926 18:10:39.420784    5964 cache.go:157] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0926 18:10:39.420843    5964 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.762910916s
	I0926 18:10:39.420878    5964 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0926 18:10:39.420936    5964 cache.go:87] Successfully saved all images to host disk.
	I0926 18:10:40.477072    5964 start.go:128] duration metric: took 2.340929542s to createHost
	I0926 18:10:40.477160    5964 start.go:83] releasing machines lock for "no-preload-421000", held for 2.341392583s
	W0926 18:10:40.477471    5964 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:40.485065    5964 out.go:201] 
	W0926 18:10:40.496101    5964 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:40.496137    5964 out.go:270] * 
	W0926 18:10:40.498202    5964 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:40.509031    5964 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (65.765666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.05s)
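
Every start failure in this section shares one root cause, visible in the stderr above: nothing is accepting connections on /var/run/socket_vmnet when the qemu2 driver invokes /opt/socket_vmnet/bin/socket_vmnet_client, so QEMU never receives its network file descriptor. A minimal diagnostic sketch in Go (not part of the harness; the socket path is copied from the log) that reproduces the failing connection check:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the failing command line in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the STDERR captured above:
		// the socket_vmnet daemon is not listening on this path.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe is refused, the daemon is likely down on the build agent; the suggested "minikube delete -p ..." cannot fix that, which is why every profile in this run fails the same way.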

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-917000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (31.000792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-917000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-917000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-917000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.628875ms)

** stderr ** 
	error: context "embed-certs-917000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-917000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (29.880041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-917000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (29.356375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
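
The image diff above is in go-cmp's "(-want +got)" form: because the VM never started, "image list --format=json" returns an empty list, so every expected v1.31.1 image is reported missing. A self-contained sketch of how a diff like this is produced (assuming the github.com/google/go-cmp/cmp package; the want list is copied verbatim from the failure above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the stopped VM reported no images

	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}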

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-917000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-917000 --alsologtostderr -v=1: exit status 83 (48.140125ms)

-- stdout --
	* The control-plane node embed-certs-917000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-917000"

-- /stdout --
** stderr ** 
	I0926 18:10:33.377039    6014 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:33.377176    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:33.377180    6014 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:33.377182    6014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:33.377301    6014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:33.377516    6014 out.go:352] Setting JSON to false
	I0926 18:10:33.377527    6014 mustload.go:65] Loading cluster: embed-certs-917000
	I0926 18:10:33.377747    6014 config.go:182] Loaded profile config "embed-certs-917000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:33.382516    6014 out.go:177] * The control-plane node embed-certs-917000 host is not running: state=Stopped
	I0926 18:10:33.393694    6014 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-917000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-917000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (29.2975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (29.322042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-917000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
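
Unlike the start failures (exit status 80), "pause" exits with status 83 here: the profile config loads (mustload.go:65), but the control-plane host is Stopped, so minikube prints advice and returns without attempting to pause. The helpers separate these outcomes purely by exit code; a sketch of that pattern using only the standard library (runStatus is a hypothetical helper, not harness code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runStatus distinguishes "the process ran and reported a code" (the 7, 80,
// and 83 seen above) from "the process could not be started at all".
func runStatus(bin string, args ...string) (int, string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err == nil {
		return 0, string(out)
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), string(out) // ran, non-zero exit
	}
	return -1, err.Error() // binary missing, not executable, etc.
}

func main() {
	code, out := runStatus("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "embed-certs-917000")
	fmt.Printf("exit=%d output=%q\n", code, out)
}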

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.888748042s)

-- stdout --
	* [default-k8s-diff-port-369000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-369000" primary control-plane node in "default-k8s-diff-port-369000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-369000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:10:33.715944    6031 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:33.716068    6031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:33.716071    6031 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:33.716074    6031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:33.716225    6031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:33.717339    6031 out.go:352] Setting JSON to false
	I0926 18:10:33.733729    6031 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4196,"bootTime":1727395237,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:33.733804    6031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:33.738698    6031 out.go:177] * [default-k8s-diff-port-369000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:33.745684    6031 notify.go:220] Checking for updates...
	I0926 18:10:33.750647    6031 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:33.757586    6031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:33.765589    6031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:33.772601    6031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:33.780663    6031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:33.787558    6031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:33.791936    6031 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:33.791998    6031 config.go:182] Loaded profile config "no-preload-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:33.792040    6031 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:33.795595    6031 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:10:33.802613    6031 start.go:297] selected driver: qemu2
	I0926 18:10:33.802655    6031 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:10:33.802664    6031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:33.804986    6031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 18:10:33.808627    6031 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:10:33.812713    6031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:33.812732    6031 cni.go:84] Creating CNI manager for ""
	I0926 18:10:33.812761    6031 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:33.812772    6031 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:10:33.812802    6031 start.go:340] cluster config:
	{Name:default-k8s-diff-port-369000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:33.816633    6031 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:33.823618    6031 out.go:177] * Starting "default-k8s-diff-port-369000" primary control-plane node in "default-k8s-diff-port-369000" cluster
	I0926 18:10:33.826619    6031 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:33.826634    6031 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:10:33.826645    6031 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:33.826720    6031 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:33.826727    6031 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:10:33.826791    6031 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/default-k8s-diff-port-369000/config.json ...
	I0926 18:10:33.826803    6031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/default-k8s-diff-port-369000/config.json: {Name:mk1de409a2c76919bdcf3ee5afeda01bbb249a76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:10:33.827137    6031 start.go:360] acquireMachinesLock for default-k8s-diff-port-369000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:33.827175    6031 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "default-k8s-diff-port-369000"
	I0926 18:10:33.827188    6031 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:33.827229    6031 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:33.835615    6031 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:33.852487    6031 start.go:159] libmachine.API.Create for "default-k8s-diff-port-369000" (driver="qemu2")
	I0926 18:10:33.852512    6031 client.go:168] LocalClient.Create starting
	I0926 18:10:33.852580    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:33.852609    6031 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:33.852617    6031 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:33.852652    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:33.852675    6031 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:33.852683    6031 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:33.853029    6031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:34.027219    6031 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:34.110601    6031 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:34.110607    6031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:34.110786    6031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:34.120089    6031 main.go:141] libmachine: STDOUT: 
	I0926 18:10:34.120113    6031 main.go:141] libmachine: STDERR: 
	I0926 18:10:34.120169    6031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2 +20000M
	I0926 18:10:34.128367    6031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:34.128382    6031 main.go:141] libmachine: STDERR: 
	I0926 18:10:34.128401    6031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:34.128412    6031 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:34.128423    6031 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:34.128448    6031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:23:2f:f1:a9:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:34.129994    6031 main.go:141] libmachine: STDOUT: 
	I0926 18:10:34.130009    6031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:34.130033    6031 client.go:171] duration metric: took 277.529459ms to LocalClient.Create
	I0926 18:10:36.132101    6031 start.go:128] duration metric: took 2.3049725s to createHost
	I0926 18:10:36.132178    6031 start.go:83] releasing machines lock for "default-k8s-diff-port-369000", held for 2.305115584s
	W0926 18:10:36.132311    6031 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:36.141620    6031 out.go:177] * Deleting "default-k8s-diff-port-369000" in qemu2 ...
	W0926 18:10:36.179415    6031 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:36.179442    6031 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:41.181397    6031 start.go:360] acquireMachinesLock for default-k8s-diff-port-369000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:41.181626    6031 start.go:364] duration metric: took 169.583µs to acquireMachinesLock for "default-k8s-diff-port-369000"
	I0926 18:10:41.181718    6031 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:41.181913    6031 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:41.189282    6031 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:41.236073    6031 start.go:159] libmachine.API.Create for "default-k8s-diff-port-369000" (driver="qemu2")
	I0926 18:10:41.236148    6031 client.go:168] LocalClient.Create starting
	I0926 18:10:41.236257    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:41.236315    6031 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:41.236329    6031 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:41.236393    6031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:41.236422    6031 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:41.236437    6031 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:41.237003    6031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:41.449604    6031 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:41.498687    6031 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:41.498692    6031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:41.498896    6031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:41.508026    6031 main.go:141] libmachine: STDOUT: 
	I0926 18:10:41.508047    6031 main.go:141] libmachine: STDERR: 
	I0926 18:10:41.508106    6031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2 +20000M
	I0926 18:10:41.515914    6031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:41.515967    6031 main.go:141] libmachine: STDERR: 
	I0926 18:10:41.515979    6031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:41.515988    6031 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:41.515997    6031 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:41.516022    6031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:2b:6b:20:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:41.517663    6031 main.go:141] libmachine: STDOUT: 
	I0926 18:10:41.517678    6031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:41.517690    6031 client.go:171] duration metric: took 281.550417ms to LocalClient.Create
	I0926 18:10:43.519794    6031 start.go:128] duration metric: took 2.337977167s to createHost
	I0926 18:10:43.519884    6031 start.go:83] releasing machines lock for "default-k8s-diff-port-369000", held for 2.338364708s
	W0926 18:10:43.520155    6031 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:43.534745    6031 out.go:201] 
	W0926 18:10:43.542756    6031 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:43.542780    6031 out.go:270] * 
	W0926 18:10:43.545236    6031 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:43.558692    6031 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (67.092458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.96s)
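
Note the create path in this log retries exactly once: after "! StartHost failed, but will try again", start.go sleeps five seconds, reacquires the machines lock, and repeats the entire VM creation; with socket_vmnet still unreachable the second attempt fails identically and the run exits with GUEST_PROVISION. In outline, the control flow is (a simplified sketch of the behavior shown above, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

// createHost stands in for the driver's create path, which cannot succeed
// while the socket_vmnet daemon is down.
func createHost() error { return errRefused }

func startHost() error {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return createHost()         // one retry, then give up
	}
	return nil
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}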

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-421000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-421000 create -f testdata/busybox.yaml: exit status 1 (29.569542ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-421000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.823458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.23175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-421000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system: exit status 1 (26.749875ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-421000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.268791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.674701042s)

-- stdout --
	* [no-preload-421000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	* Restarting existing qemu2 VM for "no-preload-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:10:42.968441    6082 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:42.968585    6082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:42.968589    6082 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:42.968591    6082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:42.968747    6082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:42.969774    6082 out.go:352] Setting JSON to false
	I0926 18:10:42.986221    6082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4205,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:42.986286    6082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:42.991679    6082 out.go:177] * [no-preload-421000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:42.999662    6082 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:42.999699    6082 notify.go:220] Checking for updates...
	I0926 18:10:43.007609    6082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:43.010636    6082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:43.014671    6082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:43.017713    6082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:43.020639    6082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:43.023995    6082 config.go:182] Loaded profile config "no-preload-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:43.024241    6082 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:43.028635    6082 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:10:43.035637    6082 start.go:297] selected driver: qemu2
	I0926 18:10:43.035643    6082 start.go:901] validating driver "qemu2" against &{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:43.035721    6082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:43.038299    6082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:43.038325    6082 cni.go:84] Creating CNI manager for ""
	I0926 18:10:43.038352    6082 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:43.038379    6082 start.go:340] cluster config:
	{Name:no-preload-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:43.042349    6082 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.050655    6082 out.go:177] * Starting "no-preload-421000" primary control-plane node in "no-preload-421000" cluster
	I0926 18:10:43.054506    6082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:43.054585    6082 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/no-preload-421000/config.json ...
	I0926 18:10:43.054610    6082 cache.go:107] acquiring lock: {Name:mka2794e14c3d83963291f7ccf8a15aef76e08bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054617    6082 cache.go:107] acquiring lock: {Name:mka191bab5daac44613d53489a541ed562ed2e7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054620    6082 cache.go:107] acquiring lock: {Name:mk9fe0dc2128d7589ccdf16b00551b774f1e3ad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054675    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0926 18:10:43.054681    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0926 18:10:43.054682    6082 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.084µs
	I0926 18:10:43.054690    6082 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 90.792µs
	I0926 18:10:43.054712    6082 cache.go:107] acquiring lock: {Name:mk34516a2cdcac63bb9f33dd4f6d722e48075ab5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054730    6082 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0926 18:10:43.054695    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0926 18:10:43.054745    6082 cache.go:107] acquiring lock: {Name:mk8b39772f709d469d2f3a2067788c1438bbdefc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054760    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0926 18:10:43.054696    6082 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0926 18:10:43.054703    6082 cache.go:107] acquiring lock: {Name:mk39e1ef9abbd9afe643b5af5519125f91230536 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054770    6082 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 170.167µs
	I0926 18:10:43.054796    6082 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0926 18:10:43.054758    6082 cache.go:107] acquiring lock: {Name:mkbb520ce013d82b322bcf16acf008c83bc86f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054813    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0926 18:10:43.054819    6082 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 116.291µs
	I0926 18:10:43.054823    6082 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0926 18:10:43.054707    6082 cache.go:107] acquiring lock: {Name:mk63edc18738ae22f0822a069a886319205bbb36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:43.054769    6082 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 56.125µs
	I0926 18:10:43.054853    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0926 18:10:43.054798    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0926 18:10:43.054859    6082 cache.go:115] /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0926 18:10:43.054863    6082 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 160.583µs
	I0926 18:10:43.054870    6082 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0926 18:10:43.054863    6082 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 136.584µs
	I0926 18:10:43.054873    6082 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0926 18:10:43.054858    6082 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 125.375µs
	I0926 18:10:43.054876    6082 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0926 18:10:43.054854    6082 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0926 18:10:43.054879    6082 cache.go:87] Successfully saved all images to host disk.
	I0926 18:10:43.055033    6082 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:43.519987    6082 start.go:364] duration metric: took 464.948833ms to acquireMachinesLock for "no-preload-421000"
	I0926 18:10:43.520161    6082 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:43.520182    6082 fix.go:54] fixHost starting: 
	I0926 18:10:43.520862    6082 fix.go:112] recreateIfNeeded on no-preload-421000: state=Stopped err=<nil>
	W0926 18:10:43.520919    6082 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:43.538585    6082 out.go:177] * Restarting existing qemu2 VM for "no-preload-421000" ...
	I0926 18:10:43.546752    6082 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:43.546940    6082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:fe:7d:6f:2d:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:43.556864    6082 main.go:141] libmachine: STDOUT: 
	I0926 18:10:43.556940    6082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:43.557057    6082 fix.go:56] duration metric: took 36.863916ms for fixHost
	I0926 18:10:43.557072    6082 start.go:83] releasing machines lock for "no-preload-421000", held for 37.062417ms
	W0926 18:10:43.557113    6082 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:43.557260    6082 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:43.557281    6082 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:48.559228    6082 start.go:360] acquireMachinesLock for no-preload-421000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:48.559696    6082 start.go:364] duration metric: took 372.5µs to acquireMachinesLock for "no-preload-421000"
	I0926 18:10:48.559808    6082 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:48.559828    6082 fix.go:54] fixHost starting: 
	I0926 18:10:48.560809    6082 fix.go:112] recreateIfNeeded on no-preload-421000: state=Stopped err=<nil>
	W0926 18:10:48.560840    6082 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:48.566334    6082 out.go:177] * Restarting existing qemu2 VM for "no-preload-421000" ...
	I0926 18:10:48.570313    6082 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:48.570486    6082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:fe:7d:6f:2d:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/no-preload-421000/disk.qcow2
	I0926 18:10:48.579834    6082 main.go:141] libmachine: STDOUT: 
	I0926 18:10:48.579910    6082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:48.579977    6082 fix.go:56] duration metric: took 20.147375ms for fixHost
	I0926 18:10:48.579990    6082 start.go:83] releasing machines lock for "no-preload-421000", held for 20.273834ms
	W0926 18:10:48.580195    6082 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:48.588263    6082 out.go:201] 
	W0926 18:10:48.591322    6082 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:48.591338    6082 out.go:270] * 
	* 
	W0926 18:10:48.593592    6082 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:48.602248    6082 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-421000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (66.185042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.75s)
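
Every failure in this group reduces to one driver error: Failed to connect to "/var/run/socket_vmnet": Connection refused. As the "executing:" lines in the stderr above show, minikube starts qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which can only hand qemu its network file descriptor while the socket_vmnet daemon is listening on /var/run/socket_vmnet. A minimal host-side triage sketch, using only the paths that appear in the log (the launchd label is an assumption and varies by install):

    # Does the unix socket exist?
    ls -l /var/run/socket_vmnet
    # Is the daemon process alive?
    pgrep -fl socket_vmnet
    # If installed as a launchd service (label is an assumption):
    sudo launchctl list | grep -i socket_vmnet
    # Exercise the client directly, with true(1) standing in for qemu:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, the last command reproduces the same "Connection refused" immediately, without involving minikube at all.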

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-369000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-369000 create -f testdata/busybox.yaml: exit status 1 (30.051917ms)

** stderr ** 
	error: context "default-k8s-diff-port-369000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-369000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.438083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.135167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-369000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-369000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-369000 describe deploy/metrics-server -n kube-system: exit status 1 (26.836291ms)

** stderr ** 
	error: context "default-k8s-diff-port-369000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-369000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.609417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
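
The kubectl errors in this test are secondary: "addons enable" exited cleanly, but the kubeconfig context for a profile is only written once "minikube start" completes, and it never did here. A quick confirmation with stock kubectl (context name taken from the log above):

    # Prints the named context if it exists, errors with "not found" otherwise
    kubectl config get-contexts default-k8s-diff-port-369000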

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
E0926 18:10:48.115633    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.588590375s)

-- stdout --
	* [default-k8s-diff-port-369000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-369000" primary control-plane node in "default-k8s-diff-port-369000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-369000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:10:46.121117    6120 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:46.121251    6120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:46.121254    6120 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:46.121257    6120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:46.121394    6120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:46.122429    6120 out.go:352] Setting JSON to false
	I0926 18:10:46.138567    6120 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4209,"bootTime":1727395237,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:46.138636    6120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:46.143728    6120 out.go:177] * [default-k8s-diff-port-369000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:46.150636    6120 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:46.150694    6120 notify.go:220] Checking for updates...
	I0926 18:10:46.158538    6120 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:46.161610    6120 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:46.164653    6120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:46.167606    6120 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:46.170608    6120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:46.173931    6120 config.go:182] Loaded profile config "default-k8s-diff-port-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:46.174191    6120 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:46.178553    6120 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:10:46.185657    6120 start.go:297] selected driver: qemu2
	I0926 18:10:46.185664    6120 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:46.185725    6120 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:46.188100    6120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0926 18:10:46.188126    6120 cni.go:84] Creating CNI manager for ""
	I0926 18:10:46.188148    6120 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:46.188173    6120 start.go:340] cluster config:
	{Name:default-k8s-diff-port-369000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-369000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:46.191856    6120 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:46.200619    6120 out.go:177] * Starting "default-k8s-diff-port-369000" primary control-plane node in "default-k8s-diff-port-369000" cluster
	I0926 18:10:46.204594    6120 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:46.204608    6120 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:10:46.204618    6120 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:46.204674    6120 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:46.204680    6120 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:10:46.204734    6120 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/default-k8s-diff-port-369000/config.json ...
	I0926 18:10:46.205243    6120 start.go:360] acquireMachinesLock for default-k8s-diff-port-369000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:46.205275    6120 start.go:364] duration metric: took 24.791µs to acquireMachinesLock for "default-k8s-diff-port-369000"
	I0926 18:10:46.205284    6120 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:46.205289    6120 fix.go:54] fixHost starting: 
	I0926 18:10:46.205423    6120 fix.go:112] recreateIfNeeded on default-k8s-diff-port-369000: state=Stopped err=<nil>
	W0926 18:10:46.205434    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:46.208684    6120 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-369000" ...
	I0926 18:10:46.216610    6120 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:46.216657    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:2b:6b:20:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:46.218636    6120 main.go:141] libmachine: STDOUT: 
	I0926 18:10:46.218654    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:46.218683    6120 fix.go:56] duration metric: took 13.393125ms for fixHost
	I0926 18:10:46.218688    6120 start.go:83] releasing machines lock for "default-k8s-diff-port-369000", held for 13.40925ms
	W0926 18:10:46.218694    6120 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:46.218728    6120 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:46.218732    6120 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:51.219033    6120 start.go:360] acquireMachinesLock for default-k8s-diff-port-369000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:51.601388    6120 start.go:364] duration metric: took 382.215667ms to acquireMachinesLock for "default-k8s-diff-port-369000"
	I0926 18:10:51.601493    6120 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:10:51.601512    6120 fix.go:54] fixHost starting: 
	I0926 18:10:51.602350    6120 fix.go:112] recreateIfNeeded on default-k8s-diff-port-369000: state=Stopped err=<nil>
	W0926 18:10:51.602383    6120 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:10:51.607887    6120 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-369000" ...
	I0926 18:10:51.632836    6120 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:51.633083    6120 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:2b:6b:20:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/default-k8s-diff-port-369000/disk.qcow2
	I0926 18:10:51.643026    6120 main.go:141] libmachine: STDOUT: 
	I0926 18:10:51.643107    6120 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:51.643196    6120 fix.go:56] duration metric: took 41.685625ms for fixHost
	I0926 18:10:51.643219    6120 start.go:83] releasing machines lock for "default-k8s-diff-port-369000", held for 41.770125ms
	W0926 18:10:51.643419    6120 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-369000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:51.651780    6120 out.go:201] 
	W0926 18:10:51.653744    6120 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:51.653770    6120 out.go:270] * 
	* 
	W0926 18:10:51.655671    6120 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:51.666776    6120 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-369000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (62.986709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.65s)
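
Note the retry shape in the stderr above: fixHost fails, minikube logs "Will try again in 5 seconds", makes one more identical attempt, and then exits with status 80 (GUEST_PROVISION). Because the same signature recurs across profiles in this report, counting it over a collected log is a quick way to confirm a single environmental fault rather than many distinct ones (logs.txt as named in the advice box above):

    # How many start attempts died on the host socket?
    grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt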

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-421000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (31.861792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-421000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.440375ms)

** stderr ** 
	error: context "no-preload-421000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.16375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-421000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (28.479291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
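
The diff above is in -want +got form: each "-"-prefixed line is an image the test expected to find in the "image list" output and did not, and the got side is empty because the command ran against a stopped host. With the VM actually booted, the same invocation from the log should return all eight entries:

    out/minikube-darwin-arm64 -p no-preload-421000 image list --format=json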

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1: exit status 83 (39.510542ms)

-- stdout --
	* The control-plane node no-preload-421000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-421000"

-- /stdout --
** stderr ** 
	I0926 18:10:48.868244    6139 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:48.868394    6139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:48.868397    6139 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:48.868399    6139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:48.868524    6139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:48.868741    6139 out.go:352] Setting JSON to false
	I0926 18:10:48.868749    6139 mustload.go:65] Loading cluster: no-preload-421000
	I0926 18:10:48.868990    6139 config.go:182] Loaded profile config "no-preload-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:48.872831    6139 out.go:177] * The control-plane node no-preload-421000 host is not running: state=Stopped
	I0926 18:10:48.875579    6139 out.go:177]   To start a cluster, run: "minikube start -p no-preload-421000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (28.374125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (29.456417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
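
Unlike the start failures (exit status 80), pause exits 83 here: the advisory path shown in the stdout above, where minikube declines to pause a host in state=Stopped and suggests "minikube start" instead. Re-running the pair of commands from the log makes the dependency explicit:

    # status reports Stopped (exit status 7), so pause can only advise (exit status 83)
    out/minikube-darwin-arm64 status -p no-preload-421000
    out/minikube-darwin-arm64 pause -p no-preload-421000 --alsologtostderr -v=1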

TestStartStop/group/newest-cni/serial/FirstStart (10.18s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.114090875s)

-- stdout --
	* [newest-cni-620000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-620000" primary control-plane node in "newest-cni-620000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-620000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:10:49.187576    6156 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:49.187703    6156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:49.187706    6156 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:49.187709    6156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:49.187849    6156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:49.188940    6156 out.go:352] Setting JSON to false
	I0926 18:10:49.205185    6156 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4212,"bootTime":1727395237,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:10:49.205272    6156 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:10:49.209703    6156 out.go:177] * [newest-cni-620000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:10:49.216621    6156 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:10:49.216653    6156 notify.go:220] Checking for updates...
	I0926 18:10:49.222626    6156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:10:49.225556    6156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:10:49.228588    6156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:10:49.231568    6156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:10:49.234535    6156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:10:49.237964    6156 config.go:182] Loaded profile config "default-k8s-diff-port-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:49.238023    6156 config.go:182] Loaded profile config "multinode-587000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:49.238073    6156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:10:49.242532    6156 out.go:177] * Using the qemu2 driver based on user configuration
	I0926 18:10:49.249559    6156 start.go:297] selected driver: qemu2
	I0926 18:10:49.249566    6156 start.go:901] validating driver "qemu2" against <nil>
	I0926 18:10:49.249571    6156 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:10:49.251851    6156 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0926 18:10:49.251887    6156 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0926 18:10:49.260572    6156 out.go:177] * Automatically selected the socket_vmnet network
	I0926 18:10:49.263660    6156 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0926 18:10:49.263691    6156 cni.go:84] Creating CNI manager for ""
	I0926 18:10:49.263713    6156 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:10:49.263718    6156 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 18:10:49.263745    6156 start.go:340] cluster config:
	{Name:newest-cni-620000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:10:49.267426    6156 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:10:49.274657    6156 out.go:177] * Starting "newest-cni-620000" primary control-plane node in "newest-cni-620000" cluster
	I0926 18:10:49.278439    6156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:10:49.278454    6156 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:10:49.278467    6156 cache.go:56] Caching tarball of preloaded images
	I0926 18:10:49.278538    6156 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:10:49.278544    6156 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:10:49.278615    6156 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/newest-cni-620000/config.json ...
	I0926 18:10:49.278634    6156 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/newest-cni-620000/config.json: {Name:mk0052f81b06a9d28158cd10402b7688e775e6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 18:10:49.278876    6156 start.go:360] acquireMachinesLock for newest-cni-620000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:49.278911    6156 start.go:364] duration metric: took 29.583µs to acquireMachinesLock for "newest-cni-620000"
	I0926 18:10:49.278924    6156 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:49.278965    6156 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:49.287438    6156 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:49.305696    6156 start.go:159] libmachine.API.Create for "newest-cni-620000" (driver="qemu2")
	I0926 18:10:49.305725    6156 client.go:168] LocalClient.Create starting
	I0926 18:10:49.305792    6156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:49.305825    6156 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:49.305835    6156 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:49.305883    6156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:49.305907    6156 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:49.305914    6156 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:49.306367    6156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:49.472859    6156 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:49.579938    6156 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:49.579944    6156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:49.580126    6156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:49.589296    6156 main.go:141] libmachine: STDOUT: 
	I0926 18:10:49.589324    6156 main.go:141] libmachine: STDERR: 
	I0926 18:10:49.589386    6156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2 +20000M
	I0926 18:10:49.597296    6156 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:49.597311    6156 main.go:141] libmachine: STDERR: 
	I0926 18:10:49.597329    6156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:49.597333    6156 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:49.597343    6156 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:49.597374    6156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:8e:d6:39:fd:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:49.599012    6156 main.go:141] libmachine: STDOUT: 
	I0926 18:10:49.599028    6156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:49.599050    6156 client.go:171] duration metric: took 293.335209ms to LocalClient.Create
	I0926 18:10:51.601105    6156 start.go:128] duration metric: took 2.322246959s to createHost
	I0926 18:10:51.601166    6156 start.go:83] releasing machines lock for "newest-cni-620000", held for 2.322368958s
	W0926 18:10:51.601265    6156 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:51.628668    6156 out.go:177] * Deleting "newest-cni-620000" in qemu2 ...
	W0926 18:10:51.692765    6156 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:51.692811    6156 start.go:729] Will try again in 5 seconds ...
	I0926 18:10:56.694834    6156 start.go:360] acquireMachinesLock for newest-cni-620000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:10:56.695333    6156 start.go:364] duration metric: took 385.542µs to acquireMachinesLock for "newest-cni-620000"
	I0926 18:10:56.695458    6156 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0926 18:10:56.695726    6156 start.go:125] createHost starting for "" (driver="qemu2")
	I0926 18:10:56.700472    6156 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0926 18:10:56.752433    6156 start.go:159] libmachine.API.Create for "newest-cni-620000" (driver="qemu2")
	I0926 18:10:56.752497    6156 client.go:168] LocalClient.Create starting
	I0926 18:10:56.752625    6156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/ca.pem
	I0926 18:10:56.752690    6156 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:56.752708    6156 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:56.752773    6156 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19711-1075/.minikube/certs/cert.pem
	I0926 18:10:56.752818    6156 main.go:141] libmachine: Decoding PEM data...
	I0926 18:10:56.752834    6156 main.go:141] libmachine: Parsing certificate...
	I0926 18:10:56.753396    6156 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I0926 18:10:56.924687    6156 main.go:141] libmachine: Creating SSH key...
	I0926 18:10:57.207358    6156 main.go:141] libmachine: Creating Disk image...
	I0926 18:10:57.207371    6156 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0926 18:10:57.207559    6156 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2.raw /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:57.216818    6156 main.go:141] libmachine: STDOUT: 
	I0926 18:10:57.216844    6156 main.go:141] libmachine: STDERR: 
	I0926 18:10:57.216912    6156 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2 +20000M
	I0926 18:10:57.224881    6156 main.go:141] libmachine: STDOUT: Image resized.
	
	I0926 18:10:57.224907    6156 main.go:141] libmachine: STDERR: 
	I0926 18:10:57.224925    6156 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:57.224935    6156 main.go:141] libmachine: Starting QEMU VM...
	I0926 18:10:57.224941    6156 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:10:57.224979    6156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6c:7c:e9:94:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:10:57.226613    6156 main.go:141] libmachine: STDOUT: 
	I0926 18:10:57.226628    6156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:10:57.226643    6156 client.go:171] duration metric: took 474.166125ms to LocalClient.Create
	I0926 18:10:59.228823    6156 start.go:128] duration metric: took 2.533151458s to createHost
	I0926 18:10:59.228921    6156 start.go:83] releasing machines lock for "newest-cni-620000", held for 2.5336965s
	W0926 18:10:59.229383    6156 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-620000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:10:59.236629    6156 out.go:201] 
	W0926 18:10:59.248789    6156 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:10:59.248854    6156 out.go:270] * 
	* 
	W0926 18:10:59.251268    6156 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:10:59.264614    6156 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (68.069542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-620000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.18s)
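Both create attempts (the initial one and the 5-second retry) fail at the same host-side step: the qemu2 driver launches QEMU through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as minikube's qemu2 driver docs describe (the socket path is taken from the failing command lines above):

	# Is the daemon's unix socket present where the qemu2 driver expects it?
	ls -l /var/run/socket_vmnet
	# If it is missing, (re)start the daemon. vmnet requires root, so the
	# brew binary is resolved explicitly to survive sudo's restricted PATH.
	HOMEBREW=$(which brew)
	sudo ${HOMEBREW} services start socket_vmnet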

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-369000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (32.774958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
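This failure is downstream of the start failure: the cluster was never provisioned, so no kubeconfig context named after the profile was ever written. A quick sketch to confirm the missing context (kubeconfig path taken from this report's environment):

	# A never-provisioned profile will be absent from the context list.
	KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig \
	  kubectl config get-contexts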

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-369000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.703083ms)

** stderr ** 
	error: context "default-k8s-diff-port-369000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-369000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.222042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-369000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.632834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
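The (-want +got) block is a go-cmp diff: each line prefixed with - is an image the test expected `image list --format=json` to report and did not. Because the host is Stopped, the list comes back empty and all eight v1.31.1 images appear missing. On a healthy profile the same expectation can be spot-checked by hand; a sketch (the grep pattern is illustrative):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-369000 image list --format=json \
	  | grep -o 'registry.k8s.io/kube-[a-z-]*:v1.31.1'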

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-369000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-369000 --alsologtostderr -v=1: exit status 83 (51.964375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-369000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-369000"

-- /stdout --
** stderr ** 
	I0926 18:10:51.933230    6178 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:10:51.933397    6178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:51.933400    6178 out.go:358] Setting ErrFile to fd 2...
	I0926 18:10:51.933403    6178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:10:51.933533    6178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:10:51.933736    6178 out.go:352] Setting JSON to false
	I0926 18:10:51.933743    6178 mustload.go:65] Loading cluster: default-k8s-diff-port-369000
	I0926 18:10:51.933962    6178 config.go:182] Loaded profile config "default-k8s-diff-port-369000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:10:51.938734    6178 out.go:177] * The control-plane node default-k8s-diff-port-369000 host is not running: state=Stopped
	I0926 18:10:51.951998    6178 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-369000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-369000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (28.3925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (29.138125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
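Pause bails out early (exit status 83) because mustload finds the host Stopped before any pause work begins. When scripting against minikube, one hedge is to gate pause on a successful status check, since status exits non-zero for a down host (exit 7 in the post-mortems above); a sketch:

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-369000 && \
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-369000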

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.181625667s)

-- stdout --
	* [newest-cni-620000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-620000" primary control-plane node in "newest-cni-620000" cluster
	* Restarting existing qemu2 VM for "newest-cni-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-620000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0926 18:11:02.903368    6226 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:11:02.903490    6226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:02.903499    6226 out.go:358] Setting ErrFile to fd 2...
	I0926 18:11:02.903504    6226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:02.903640    6226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:11:02.904604    6226 out.go:352] Setting JSON to false
	I0926 18:11:02.920836    6226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4225,"bootTime":1727395237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 18:11:02.920938    6226 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 18:11:02.924944    6226 out.go:177] * [newest-cni-620000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 18:11:02.931790    6226 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 18:11:02.931818    6226 notify.go:220] Checking for updates...
	I0926 18:11:02.939886    6226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 18:11:02.942891    6226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 18:11:02.945891    6226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 18:11:02.948911    6226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 18:11:02.950441    6226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 18:11:02.954229    6226 config.go:182] Loaded profile config "newest-cni-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:11:02.954484    6226 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 18:11:02.958871    6226 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 18:11:02.964024    6226 start.go:297] selected driver: qemu2
	I0926 18:11:02.964030    6226 start.go:901] validating driver "qemu2" against &{Name:newest-cni-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:11:02.964120    6226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 18:11:02.966581    6226 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0926 18:11:02.966610    6226 cni.go:84] Creating CNI manager for ""
	I0926 18:11:02.966632    6226 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 18:11:02.966654    6226 start.go:340] cluster config:
	{Name:newest-cni-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 18:11:02.970154    6226 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 18:11:02.978894    6226 out.go:177] * Starting "newest-cni-620000" primary control-plane node in "newest-cni-620000" cluster
	I0926 18:11:02.982917    6226 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 18:11:02.982933    6226 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 18:11:02.982943    6226 cache.go:56] Caching tarball of preloaded images
	I0926 18:11:02.983020    6226 preload.go:172] Found /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0926 18:11:02.983026    6226 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0926 18:11:02.983092    6226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/newest-cni-620000/config.json ...
	I0926 18:11:02.983633    6226 start.go:360] acquireMachinesLock for newest-cni-620000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:11:02.983663    6226 start.go:364] duration metric: took 23.792µs to acquireMachinesLock for "newest-cni-620000"
	I0926 18:11:02.983672    6226 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:11:02.983678    6226 fix.go:54] fixHost starting: 
	I0926 18:11:02.983806    6226 fix.go:112] recreateIfNeeded on newest-cni-620000: state=Stopped err=<nil>
	W0926 18:11:02.983815    6226 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:11:02.987900    6226 out.go:177] * Restarting existing qemu2 VM for "newest-cni-620000" ...
	I0926 18:11:02.995757    6226 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:11:02.995797    6226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6c:7c:e9:94:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:11:02.997898    6226 main.go:141] libmachine: STDOUT: 
	I0926 18:11:02.997914    6226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:11:02.997943    6226 fix.go:56] duration metric: took 14.265334ms for fixHost
	I0926 18:11:02.997948    6226 start.go:83] releasing machines lock for "newest-cni-620000", held for 14.281209ms
	W0926 18:11:02.997954    6226 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:11:02.997989    6226 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:11:02.997994    6226 start.go:729] Will try again in 5 seconds ...
	I0926 18:11:07.999916    6226 start.go:360] acquireMachinesLock for newest-cni-620000: {Name:mk4180634b944e0bf25d258156eee8386d5516ae Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0926 18:11:08.000307    6226 start.go:364] duration metric: took 312.791µs to acquireMachinesLock for "newest-cni-620000"
	I0926 18:11:08.000422    6226 start.go:96] Skipping create...Using existing machine configuration
	I0926 18:11:08.000443    6226 fix.go:54] fixHost starting: 
	I0926 18:11:08.001115    6226 fix.go:112] recreateIfNeeded on newest-cni-620000: state=Stopped err=<nil>
	W0926 18:11:08.001148    6226 fix.go:138] unexpected machine state, will restart: <nil>
	I0926 18:11:08.009524    6226 out.go:177] * Restarting existing qemu2 VM for "newest-cni-620000" ...
	I0926 18:11:08.013487    6226 qemu.go:418] Using hvf for hardware acceleration
	I0926 18:11:08.013769    6226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6c:7c:e9:94:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19711-1075/.minikube/machines/newest-cni-620000/disk.qcow2
	I0926 18:11:08.022834    6226 main.go:141] libmachine: STDOUT: 
	I0926 18:11:08.022889    6226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0926 18:11:08.022954    6226 fix.go:56] duration metric: took 22.510208ms for fixHost
	I0926 18:11:08.022973    6226 start.go:83] releasing machines lock for "newest-cni-620000", held for 22.642458ms
	W0926 18:11:08.023221    6226 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-620000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0926 18:11:08.028656    6226 out.go:201] 
	W0926 18:11:08.032594    6226 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0926 18:11:08.032619    6226 out.go:270] * 
	* 
	W0926 18:11:08.035173    6226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 18:11:08.043523    6226 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (68.126417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-620000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
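The restart path hits the same socket_vmnet connection failure as the first start, so fixHost gives up after one retry. Once the daemon is reachable again, the log's own suggestion is the practical recovery; a sketch with the profile name and core flags copied from the failing invocation:

	out/minikube-darwin-arm64 delete -p newest-cni-620000
	out/minikube-darwin-arm64 start -p newest-cni-620000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1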

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-620000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (30.982875ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-620000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-620000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-620000 --alsologtostderr -v=1: exit status 83 (39.615541ms)

-- stdout --
	* The control-plane node newest-cni-620000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-620000"

-- /stdout --
** stderr ** 
	I0926 18:11:08.227342    6240 out.go:345] Setting OutFile to fd 1 ...
	I0926 18:11:08.227478    6240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:08.227482    6240 out.go:358] Setting ErrFile to fd 2...
	I0926 18:11:08.227485    6240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 18:11:08.227603    6240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 18:11:08.227817    6240 out.go:352] Setting JSON to false
	I0926 18:11:08.227824    6240 mustload.go:65] Loading cluster: newest-cni-620000
	I0926 18:11:08.228045    6240 config.go:182] Loaded profile config "newest-cni-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 18:11:08.231254    6240 out.go:177] * The control-plane node newest-cni-620000 host is not running: state=Stopped
	I0926 18:11:08.235048    6240 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-620000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-620000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (29.589833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-620000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (30.187916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-620000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (155/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 7.56
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.1
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 136.84
29 TestAddons/serial/Volcano 39.9
31 TestAddons/serial/GCPAuth/Namespaces 0.08
34 TestAddons/parallel/Ingress 17.5
35 TestAddons/parallel/InspektorGadget 10.44
36 TestAddons/parallel/MetricsServer 5.25
38 TestAddons/parallel/CSI 60.57
39 TestAddons/parallel/Headlamp 18.63
40 TestAddons/parallel/CloudSpanner 5.21
41 TestAddons/parallel/LocalPath 9.61
42 TestAddons/parallel/NvidiaDevicePlugin 5.19
43 TestAddons/parallel/Yakd 10.29
44 TestAddons/StoppedEnableDisable 12.4
52 TestHyperKitDriverInstallOrUpdate 11.11
55 TestErrorSpam/setup 34.9
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.23
58 TestErrorSpam/pause 0.64
59 TestErrorSpam/unpause 0.59
60 TestErrorSpam/stop 55.27
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 74.99
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 42.25
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.05
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
72 TestFunctional/serial/CacheCmd/cache/add_local 1.69
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 0.7
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 0.7
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
80 TestFunctional/serial/ExtraConfig 280.19
81 TestFunctional/serial/ComponentHealth 0.05
82 TestFunctional/serial/LogsCmd 0.56
83 TestFunctional/serial/LogsFileCmd 0.55
84 TestFunctional/serial/InvalidService 4.21
86 TestFunctional/parallel/ConfigCmd 0.23
87 TestFunctional/parallel/DashboardCmd 12.37
88 TestFunctional/parallel/DryRun 0.22
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.25
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 24.46
98 TestFunctional/parallel/SSHCmd 0.13
99 TestFunctional/parallel/CpCmd 0.44
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.41
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
110 TestFunctional/parallel/License 0.41
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.15
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
117 TestFunctional/parallel/ImageCommands/ImageBuild 1.82
118 TestFunctional/parallel/ImageCommands/Setup 1.81
119 TestFunctional/parallel/DockerEnv/bash 0.33
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
123 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.17
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.18
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.17
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
136 TestFunctional/parallel/ServiceCmd/List 0.13
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
148 TestFunctional/parallel/ProfileCmd/profile_list 0.13
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
150 TestFunctional/parallel/MountCmd/any-port 5.87
151 TestFunctional/parallel/MountCmd/specific-port 1.09
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.8
153 TestFunctional/delete_echo-server_images 0.07
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 174.58
160 TestMultiControlPlane/serial/DeployApp 5.33
161 TestMultiControlPlane/serial/PingHostFromPods 0.73
162 TestMultiControlPlane/serial/AddWorkerNode 57.07
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.29
165 TestMultiControlPlane/serial/CopyFile 4.1
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.34
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 1.1
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.52
276 TestNoKubernetes/serial/Stop 3.41
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
293 TestStartStop/group/old-k8s-version/serial/Stop 1.9
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
304 TestStartStop/group/embed-certs/serial/Stop 1.92
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.11
317 TestStartStop/group/no-preload/serial/Stop 2.02
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.12
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
337 TestStartStop/group/newest-cni/serial/Stop 3.35
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0926 17:14:09.503923    1597 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0926 17:14:09.504258    1597 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
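
The preload check above (preload.go:131/146) boils down to building the cache path for a tarball whose name encodes the preload schema version, Kubernetes version, container runtime, storage driver, and architecture, then testing whether the file exists. A sketch of that lookup, assuming the naming scheme visible in the log (the "v18" schema version and cache layout are taken from the paths above, not from minikube's source):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath reproduces the file name seen in the log:
// preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "docker", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("no local preload; it would be downloaded on demand")
	}
}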

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-085000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-085000: exit status 85 (92.964667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |          |
	|         | -p download-only-085000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:13:50
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:13:50.493756    1598 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:13:50.493900    1598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:50.493903    1598 out.go:358] Setting ErrFile to fd 2...
	I0926 17:13:50.493906    1598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:13:50.494042    1598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	W0926 17:13:50.494134    1598 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19711-1075/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19711-1075/.minikube/config/config.json: no such file or directory
	I0926 17:13:50.495413    1598 out.go:352] Setting JSON to true
	I0926 17:13:50.512795    1598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":793,"bootTime":1727395237,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:13:50.512856    1598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:13:50.517397    1598 out.go:97] [download-only-085000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:13:50.517530    1598 notify.go:220] Checking for updates...
	W0926 17:13:50.517580    1598 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball: no such file or directory
	I0926 17:13:50.520281    1598 out.go:169] MINIKUBE_LOCATION=19711
	I0926 17:13:50.523329    1598 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:13:50.527353    1598 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:13:50.530332    1598 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:13:50.533363    1598 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	W0926 17:13:50.537845    1598 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 17:13:50.538122    1598 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:13:50.543363    1598 out.go:97] Using the qemu2 driver based on user configuration
	I0926 17:13:50.543383    1598 start.go:297] selected driver: qemu2
	I0926 17:13:50.543397    1598 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:13:50.543485    1598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:13:50.546284    1598 out.go:169] Automatically selected the socket_vmnet network
	I0926 17:13:50.552005    1598 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0926 17:13:50.552088    1598 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:13:50.552138    1598 cni.go:84] Creating CNI manager for ""
	I0926 17:13:50.552169    1598 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0926 17:13:50.552214    1598 start.go:340] cluster config:
	{Name:download-only-085000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-085000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:13:50.557364    1598 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:13:50.561320    1598 out.go:97] Downloading VM boot image ...
	I0926 17:13:50.561334    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I0926 17:13:59.943400    1598 out.go:97] Starting "download-only-085000" primary control-plane node in "download-only-085000" cluster
	I0926 17:13:59.943425    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:00.007195    1598 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 17:14:00.007217    1598 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:00.007405    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:00.012500    1598 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0926 17:14:00.012507    1598 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:00.094043    1598 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0926 17:14:07.930946    1598 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:07.931123    1598 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:08.628519    1598 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0926 17:14:08.628733    1598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/download-only-085000/config.json ...
	I0926 17:14:08.628750    1598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/download-only-085000/config.json: {Name:mk4ef8888d5b58bf059454514e2a764f50e81632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0926 17:14:08.629002    1598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0926 17:14:08.629194    1598 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0926 17:14:09.452306    1598 out.go:193] 
	W0926 17:14:09.460315    1598 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19711-1075/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0 0x108ff96c0] Decompressors:map[bz2:0x140004871d0 gz:0x140004871d8 tar:0x14000487180 tar.bz2:0x14000487190 tar.gz:0x140004871a0 tar.xz:0x140004871b0 tar.zst:0x140004871c0 tbz2:0x14000487190 tgz:0x140004871a0 txz:0x140004871b0 tzst:0x140004871c0 xz:0x140004871e0 zip:0x140004871f0 zst:0x140004871e8] Getters:map[file:0x1400078a6f0 http:0x140001520a0 https:0x14000152230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0926 17:14:09.460342    1598 out_reason.go:110] 
	W0926 17:14:09.470306    1598 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0926 17:14:09.474215    1598 out.go:193] 
	
	
	* The control-plane node download-only-085000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-085000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
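
The "Failed to cache kubectl" error captured in the logs above comes from a checksum-verified download: the struct dump is hashicorp/go-getter's client, and the "?checksum=file:<url>.sha256" query tells it to fetch and verify the checksum file before the binary. The 404 is on that checksum file; dl.k8s.io apparently publishes no darwin/arm64 kubectl build for v1.20.0. A minimal sketch of the same request, assuming go-getter's v1 API; the destination path is a placeholder:

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
	// The checksum=file: query makes go-getter download and verify the
	// .sha256 file first; that request is what returns the 404 above.
	src := base + "?checksum=file:" + base + ".sha256"
	if err := getter.GetFile("/tmp/kubectl.download", src); err != nil {
		fmt.Println("download failed:", err)
	}
}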

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-085000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (7.56s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-769000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-769000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.561352666s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.56s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0926 17:14:17.409160    1597 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0926 17:14:17.409202    1597 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-769000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-769000: exit status 85 (71.133291ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:13 PDT |                     |
	|         | -p download-only-085000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| delete  | -p download-only-085000        | download-only-085000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT | 26 Sep 24 17:14 PDT |
	| start   | -o=json --download-only        | download-only-769000 | jenkins | v1.34.0 | 26 Sep 24 17:14 PDT |                     |
	|         | -p download-only-769000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/26 17:14:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0926 17:14:09.874929    1625 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:14:09.875052    1625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:09.875056    1625 out.go:358] Setting ErrFile to fd 2...
	I0926 17:14:09.875059    1625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:14:09.875209    1625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:14:09.876331    1625 out.go:352] Setting JSON to true
	I0926 17:14:09.892460    1625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":812,"bootTime":1727395237,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:14:09.892518    1625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:14:09.897171    1625 out.go:97] [download-only-769000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:14:09.897295    1625 notify.go:220] Checking for updates...
	I0926 17:14:09.901021    1625 out.go:169] MINIKUBE_LOCATION=19711
	I0926 17:14:09.904029    1625 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:14:09.908045    1625 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:14:09.911013    1625 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:14:09.914050    1625 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	W0926 17:14:09.919990    1625 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0926 17:14:09.920171    1625 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:14:09.923015    1625 out.go:97] Using the qemu2 driver based on user configuration
	I0926 17:14:09.923025    1625 start.go:297] selected driver: qemu2
	I0926 17:14:09.923029    1625 start.go:901] validating driver "qemu2" against <nil>
	I0926 17:14:09.923081    1625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0926 17:14:09.926018    1625 out.go:169] Automatically selected the socket_vmnet network
	I0926 17:14:09.931108    1625 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0926 17:14:09.931190    1625 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0926 17:14:09.931210    1625 cni.go:84] Creating CNI manager for ""
	I0926 17:14:09.931241    1625 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0926 17:14:09.931250    1625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0926 17:14:09.931302    1625 start.go:340] cluster config:
	{Name:download-only-769000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-769000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:14:09.934616    1625 iso.go:125] acquiring lock: {Name:mk5bc1da5dc6eb3da72d129b802fb50227986db1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0926 17:14:09.938096    1625 out.go:97] Starting "download-only-769000" primary control-plane node in "download-only-769000" cluster
	I0926 17:14:09.938102    1625 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:09.998916    1625 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0926 17:14:09.998938    1625 cache.go:56] Caching tarball of preloaded images
	I0926 17:14:09.999120    1625 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0926 17:14:10.011849    1625 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0926 17:14:10.011857    1625 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0926 17:14:10.093847    1625 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19711-1075/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-769000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-769000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.10s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-769000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
I0926 17:14:17.886131    1597 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-534000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-534000
--- PASS: TestBinaryMirror (0.38s)
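
TestBinaryMirror points minikube at a local mirror URL with "--binary-mirror http://127.0.0.1:49312"; any static file server whose directory layout matches the release paths minikube requests (e.g. /release/v1.31.1/bin/darwin/arm64/kubectl) will do. A sketch of such a mirror under that assumption; the served directory is a placeholder, and this is not the test's actual implementation:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a pre-populated cache directory over loopback only; minikube is
	// then started with --binary-mirror pointing at this address.
	h := http.FileServer(http.Dir("/tmp/binary-mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:49312", h))
}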

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-514000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-514000: exit status 85 (59.642625ms)

-- stdout --
	* Profile "addons-514000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-514000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-514000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-514000: exit status 85 (55.819792ms)

-- stdout --
	* Profile "addons-514000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-514000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (136.84s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-514000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-514000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m16.8407515s)
--- PASS: TestAddons/Setup (136.84s)
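
Every "(dbg) Run:" line in this report shells out to the built minikube binary (or kubectl) and records the wall-clock duration that later shows up in "(dbg) Done: ...: (2m16.8407515s)". A sketch of that harness pattern with a hypothetical runDbg helper, not the test suite's actual code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runDbg runs a command under a deadline and returns its combined output
// and elapsed time, mirroring the Run/Done bookkeeping in the log.
func runDbg(ctx context.Context, name string, args ...string) (string, time.Duration, error) {
	start := time.Now()
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	return string(out), time.Since(start), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	out, d, err := runDbg(ctx, "out/minikube-darwin-arm64", "start", "-p", "addons-514000", "--wait=true")
	fmt.Printf("took %s, err=%v\n%s", d, err, out)
}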

TestAddons/serial/Volcano (39.9s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 8.14375ms
addons_test.go:835: volcano-scheduler stabilized in 8.152667ms
addons_test.go:851: volcano-controller stabilized in 8.196625ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-jctr2" [38576448-44bf-4d59-b2cd-2fc864fa2c7d] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.008780958s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-kw6c8" [ba50146c-fdb4-4a0c-952c-2011fdf9979a] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003514667s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-szjgt" [116207e7-a6fb-42a6-a886-ee5b590f8666] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005800667s
addons_test.go:870: (dbg) Run:  kubectl --context addons-514000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-514000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-514000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [88d09851-2704-47b8-a05e-63f6bb789167] Pending
helpers_test.go:344: "test-job-nginx-0" [88d09851-2704-47b8-a05e-63f6bb789167] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [88d09851-2704-47b8-a05e-63f6bb789167] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005645792s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-arm64 -p addons-514000 addons disable volcano --alsologtostderr -v=1: (10.649328458s)
--- PASS: TestAddons/serial/Volcano (39.90s)
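
The "waiting 6m0s for pods matching ..." lines above are a label-selector poll: list the pods in the namespace, check that they are Running, and retry until the deadline. A client-go sketch of that pattern (the real helper lives in helpers_test.go:344, so this is an illustration; the kubeconfig loading assumes the default home path and the 2-second interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until at least one pod matches the selector and all
// matching pods report phase Running, or the timeout expires.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=volcano-scheduler healthy")
}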

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-514000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-514000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Ingress (17.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-514000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-514000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-514000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7a7fb6a3-677b-49af-a514-f1a72333254d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7a7fb6a3-677b-49af-a514-f1a72333254d] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.006137333s
I0926 17:26:47.337432    1597 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-514000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:309: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-darwin-arm64 -p addons-514000 addons disable ingress --alsologtostderr -v=1: (7.266647708s)
--- PASS: TestAddons/parallel/Ingress (17.50s)
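
The curl step above exercises ingress routing by Host header: the request is sent to loopback while claiming to be for nginx.example.com, and the ingress controller picks the backend from that header. The same check in Go (the URL and host are copied from the log; error handling is minimal):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // equivalent of curl -H 'Host: nginx.example.com'
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}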

TestAddons/parallel/InspektorGadget (10.44s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9rbgm" [39b26726-c988-485e-af26-48900aa73ca5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007022291s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-514000
addons_test.go:789: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-514000: (5.430139667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.239458ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lp77z" [a135456e-4dc7-40b1-8fef-cd0581a32c60] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005237542s
addons_test.go:413: (dbg) Run:  kubectl --context addons-514000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (60.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0926 17:26:10.491154    1597 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0926 17:26:10.493911    1597 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0926 17:26:10.493921    1597 kapi.go:107] duration metric: took 2.801916ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 2.809875ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [852722dd-6cc6-4876-a0bf-e65d15a4c30f] Pending
helpers_test.go:344: "task-pv-pod" [852722dd-6cc6-4876-a0bf-e65d15a4c30f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [852722dd-6cc6-4876-a0bf-e65d15a4c30f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.011438208s
addons_test.go:528: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-514000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-514000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-514000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-514000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-514000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6ddc15dc-b612-45f4-bacb-b00af4edf713] Pending
helpers_test.go:344: "task-pv-pod-restore" [6ddc15dc-b612-45f4-bacb-b00af4edf713] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6ddc15dc-b612-45f4-bacb-b00af4edf713] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.008584375s
addons_test.go:570: (dbg) Run:  kubectl --context addons-514000 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-514000 delete pod task-pv-pod-restore: (1.08365675s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-514000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-514000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-arm64 -p addons-514000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.120775625s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.57s)
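
The long runs of identical helpers_test.go:394 lines above are a poll loop: the same kubectl jsonpath query is repeated until the claim's status.phase reports "Bound" or the timeout expires. A sketch of that loop, reusing the exact invocation from the log (the 2-second interval is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same command as the repeated helpers_test.go:394 lines above.
		out, err := exec.Command("kubectl", "--context", "addons-514000",
			"get", "pvc", "hpvc", "-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}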

TestAddons/parallel/Headlamp (18.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-514000 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8t4xf" [21224a0e-014a-48a8-b17c-c4add4b09f5b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8t4xf" [21224a0e-014a-48a8-b17c-c4add4b09f5b] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.011700875s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-arm64 -p addons-514000 addons disable headlamp --alsologtostderr -v=1: (5.290139833s)
--- PASS: TestAddons/parallel/Headlamp (18.63s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.21s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-t58gm" [bd10143d-9dd3-419d-ba7c-d53782ac97d6] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009187125s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-514000
--- PASS: TestAddons/parallel/CloudSpanner (5.21s)

                                                
                                    
TestAddons/parallel/LocalPath (9.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-514000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-514000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8294282f-451a-417e-9e61-401b10139593] Pending
helpers_test.go:344: "test-local-path" [8294282f-451a-417e-9e61-401b10139593] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8294282f-451a-417e-9e61-401b10139593] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8294282f-451a-417e-9e61-401b10139593] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.008968167s
addons_test.go:938: (dbg) Run:  kubectl --context addons-514000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 ssh "cat /opt/local-path-provisioner/pvc-5c58b83f-e535-4b6e-8a9a-9b3242b1d8cf_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-514000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-514000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.61s)
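
The local-path data lands under /opt/local-path-provisioner on the node; the claim-specific directory name (pvc-5c58b83f-…_default_test-pvc above) is generated per PVC. A minimal sketch of the same round trip:

    kubectl --context addons-514000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-514000 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # once test-local-path reports Succeeded, read the data back from the node
    out/minikube-darwin-arm64 -p addons-514000 ssh "ls /opt/local-path-provisioner/"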

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ggs9h" [1bbbb61c-d5bc-49d8-9d69-003bf5aac935] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00860025s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-514000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.19s)

                                                
                                    
TestAddons/parallel/Yakd (10.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jjrmf" [1c0a689b-2890-43d6-a1ef-fb65d24aef0f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.010228208s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-arm64 -p addons-514000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-arm64 -p addons-514000 addons disable yakd --alsologtostderr -v=1: (5.283032625s)
--- PASS: TestAddons/parallel/Yakd (10.29s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-514000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-514000: (12.210598375s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-514000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-514000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-514000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (11.11s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
I0926 17:56:24.519665    1597 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0926 17:56:24.519906    1597 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0926 17:56:26.458814    1597 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0926 17:56:26.459049    1597 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0926 17:56:26.459097    1597 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit
I0926 17:56:26.950391    1597 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40 0x106e76d40] Decompressors:map[bz2:0x1400047f4f0 gz:0x1400047f4f8 tar:0x1400047f4a0 tar.bz2:0x1400047f4b0 tar.gz:0x1400047f4c0 tar.xz:0x1400047f4d0 tar.zst:0x1400047f4e0 tbz2:0x1400047f4b0 tgz:0x1400047f4c0 txz:0x1400047f4d0 tzst:0x1400047f4e0 xz:0x1400047f500 zip:0x1400047f510 zst:0x1400047f508] Getters:map[file:0x14001970410 http:0x14000692280 https:0x140006922d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0926 17:56:26.950504    1597 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate4073921552/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.11s)
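
The 404 above is the expected first step of the fallback: the arch-suffixed binary (…-arm64) is tried first, and on failure the unsuffixed common binary is fetched. A rough hand equivalent (curl shown for illustration only; the test itself downloads through go-getter with a .sha256 checksum file, as the log shows):

    REL=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    curl -fLO $REL/docker-machine-driver-hyperkit-arm64 \
      || curl -fLO $REL/docker-machine-driver-hyperkit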

                                                
                                    
TestErrorSpam/setup (34.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-783000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-783000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 --driver=qemu2 : (34.902754916s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.90s)

                                                
                                    
TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 status
--- PASS: TestErrorSpam/status (0.23s)

                                                
                                    
TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 pause
--- PASS: TestErrorSpam/pause (0.64s)

                                                
                                    
TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

                                                
                                    
TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop: (3.196734541s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop: (26.034122417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-783000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-783000 stop: (26.031798333s)
--- PASS: TestErrorSpam/stop (55.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19711-1075/.minikube/files/etc/test/nested/copy/1597/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (74.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-449000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m14.992897875s)
--- PASS: TestFunctional/serial/StartWithProxy (74.99s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.25s)

=== RUN   TestFunctional/serial/SoftStart
I0926 17:30:11.800213    1597 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-449000 --alsologtostderr -v=8: (42.244624083s)
functional_test.go:663: soft start took 42.245102708s for "functional-449000" cluster.
I0926 17:30:54.044045    1597 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.25s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-449000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-449000 cache add registry.k8s.io/pause:3.1: (1.060339708s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2340099182/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache add minikube-local-cache-test:functional-449000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-449000 cache add minikube-local-cache-test:functional-449000: (1.371371s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache delete minikube-local-cache-test:functional-449000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-449000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.69s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.339708ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.70s)
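
The reload sequence above can be replayed directly; a minimal sketch, assuming the pause image is already present in minikube's cache:

    out/minikube-darwin-arm64 -p functional-449000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-darwin-arm64 -p functional-449000 cache reload
    out/minikube-darwin-arm64 -p functional-449000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again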

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.7s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 kubectl -- --context functional-449000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.70s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-449000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-449000 get pods: (1.012601917s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

                                                
                                    
TestFunctional/serial/ExtraConfig (280.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0926 17:31:35.138793    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.146573    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.159200    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.182645    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.226133    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.309620    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.473162    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:35.796910    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:36.440811    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:37.724668    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:40.288428    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:45.412115    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:31:55.652861    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:32:16.110943    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:32:57.065091    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:34:18.985276    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-449000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (4m40.193929292s)
functional_test.go:761: restart took 4m40.194022125s for "functional-449000" cluster.
I0926 17:35:41.366011    1597 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (280.19s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-449000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
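
The phase/status pairs above come from the control-plane pod list; roughly the same view can be pulled with a jsonpath query (a sketch, not the test's exact parsing):

    kubectl --context functional-449000 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'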

                                                
                                    
TestFunctional/serial/LogsCmd (0.56s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.56s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2015867226/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.55s)

                                                
                                    
TestFunctional/serial/InvalidService (4.21s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-449000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-449000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-449000: exit status 115 (152.778333ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31471 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-449000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)
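
SVC_UNREACHABLE here means the Service object exists but has no running backing pod. A quick manual confirmation of that state (hypothetical follow-up, not part of the test):

    kubectl --context functional-449000 apply -f testdata/invalidsvc.yaml
    kubectl --context functional-449000 get endpoints invalid-svc   # empty ENDPOINTS column
    kubectl --context functional-449000 delete -f testdata/invalidsvc.yaml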

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 config get cpus: exit status 14 (30.217958ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 config get cpus: exit status 14 (32.353542ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
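
Condensed, the round trip being verified is: config get on an unset key exits non-zero (14 here), while set/get/unset behave symmetrically:

    out/minikube-darwin-arm64 -p functional-449000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-449000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-449000 config unset cpus
    out/minikube-darwin-arm64 -p functional-449000 config get cpus     # exit status 14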

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-449000 --alsologtostderr -v=1]
E0926 17:36:35.091741    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-449000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2816: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.37s)

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.631ms)
-- stdout --
	* [functional-449000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0926 17:36:33.728309    2799 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:36:33.728494    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.728498    2799 out.go:358] Setting ErrFile to fd 2...
	I0926 17:36:33.728500    2799 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.728648    2799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:36:33.729675    2799 out.go:352] Setting JSON to false
	I0926 17:36:33.747239    2799 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2156,"bootTime":1727395237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:36:33.747318    2799 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:36:33.751668    2799 out.go:177] * [functional-449000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0926 17:36:33.759566    2799 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:36:33.759606    2799 notify.go:220] Checking for updates...
	I0926 17:36:33.767529    2799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:36:33.770493    2799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:36:33.773508    2799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:36:33.776609    2799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:36:33.777895    2799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:36:33.780847    2799 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:36:33.781106    2799 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:36:33.785562    2799 out.go:177] * Using the qemu2 driver based on existing profile
	I0926 17:36:33.790468    2799 start.go:297] selected driver: qemu2
	I0926 17:36:33.790474    2799 start.go:901] validating driver "qemu2" against &{Name:functional-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:36:33.790526    2799 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:36:33.797557    2799 out.go:201] 
	W0926 17:36:33.801422    2799 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0926 17:36:33.805578    2799 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
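
The exit 23 comes from --dry-run validation, which rejects the 250MB request before any VM work; without the memory override the same dry run passes against the profile's existing 4000MB allocation:

    out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-449000 --dry-run --alsologtostderr -v=1 --driver=qemu2             # passes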

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.858834ms)
-- stdout --
	* [functional-449000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
-- /stdout --
** stderr ** 
	I0926 17:36:33.946324    2810 out.go:345] Setting OutFile to fd 1 ...
	I0926 17:36:33.946437    2810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.946440    2810 out.go:358] Setting ErrFile to fd 2...
	I0926 17:36:33.946442    2810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0926 17:36:33.946567    2810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
	I0926 17:36:33.947836    2810 out.go:352] Setting JSON to false
	I0926 17:36:33.965072    2810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2156,"bootTime":1727395237,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0926 17:36:33.965166    2810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0926 17:36:33.969585    2810 out.go:177] * [functional-449000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0926 17:36:33.976588    2810 out.go:177]   - MINIKUBE_LOCATION=19711
	I0926 17:36:33.976649    2810 notify.go:220] Checking for updates...
	I0926 17:36:33.983586    2810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	I0926 17:36:33.986513    2810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0926 17:36:33.989564    2810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0926 17:36:33.992466    2810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	I0926 17:36:33.995525    2810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0926 17:36:33.998861    2810 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0926 17:36:33.999142    2810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0926 17:36:34.003503    2810 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0926 17:36:34.010507    2810 start.go:297] selected driver: qemu2
	I0926 17:36:34.010514    2810 start.go:901] validating driver "qemu2" against &{Name:functional-449000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-449000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0926 17:36:34.010608    2810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0926 17:36:34.016512    2810 out.go:201] 
	W0926 17:36:34.019537    2810 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0926 17:36:34.023525    2810 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
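
The French output is selected from the process locale; assuming the test drives it through LC_ALL (the variable itself is not shown in this log), the equivalent invocation would be roughly:

    LC_ALL=fr out/minikube-darwin-arm64 start -p functional-449000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2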

                                                
                                    
TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.46s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b49e09e2-0da2-4458-80d9-d87339f50e38] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0091665s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-449000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-449000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-449000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-449000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7352d91f-b6f2-4c23-b0c5-cd1d4ef45230] Pending
helpers_test.go:344: "sp-pod" [7352d91f-b6f2-4c23-b0c5-cd1d4ef45230] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7352d91f-b6f2-4c23-b0c5-cd1d4ef45230] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00536825s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-449000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-449000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-449000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0d1f6a4d-00a1-4189-9e42-5e8ab5914a58] Pending
helpers_test.go:344: "sp-pod" [0d1f6a4d-00a1-4189-9e42-5e8ab5914a58] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0d1f6a4d-00a1-4189-9e42-5e8ab5914a58] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009659792s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-449000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.46s)
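
Note: the claim and pod fixtures live in testdata/storage-provisioner; the flow the test exercises (create claim, mount it in a pod, write a file, recreate the pod, confirm the file survived) condenses to:

    kubectl --context functional-449000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-449000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-449000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-449000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-449000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-449000 exec sp-pod -- ls /tmp/mount   # foo persists across the pod restart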

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh -n functional-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cp functional-449000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2597859085/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh -n functional-449000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh -n functional-449000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.44s)
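
Note: the cp test round-trips a file host -> guest -> host; the same usage, condensed from the runs above (local destination path shortened here):

    out/minikube-darwin-arm64 -p functional-449000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-449000 cp functional-449000:/home/docker/cp-test.txt ./cp-test.txt
    out/minikube-darwin-arm64 -p functional-449000 ssh -n functional-449000 "sudo cat /home/docker/cp-test.txt"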

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1597/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /etc/test/nested/copy/1597/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1597.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /etc/ssl/certs/1597.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1597.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /usr/share/ca-certificates/1597.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /etc/ssl/certs/15972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /usr/share/ca-certificates/15972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)
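
Note: 51391683.0 and 3ec20f2e.0 are the OpenSSL subject-hash names for the same certificates; a sketch of how such a name is derived, assuming openssl is available in the guest:

    out/minikube-darwin-arm64 -p functional-449000 ssh \
      "openssl x509 -in /usr/share/ca-certificates/1597.pem -noout -hash"   # prints the hash, e.g. 51391683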

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-449000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "sudo systemctl is-active crio": exit status 1 (64.937ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
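
Note: the non-zero exit above is the expected result: with the docker runtime selected, crio must be inactive, and systemctl is-active exits 0 only for an active unit (status 3 here signals inactive). The same check by hand:

    out/minikube-darwin-arm64 -p functional-449000 ssh "sudo systemctl is-active crio" \
      || echo "crio is not active, as expected on a docker-runtime cluster"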

TestFunctional/parallel/License (0.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-449000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-449000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-449000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-449000 image ls --format short --alsologtostderr:
I0926 17:36:42.983664    2838 out.go:345] Setting OutFile to fd 1 ...
I0926 17:36:42.983813    2838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:42.983817    2838 out.go:358] Setting ErrFile to fd 2...
I0926 17:36:42.983819    2838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:42.983939    2838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:36:42.984385    2838 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:42.984454    2838 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:42.985330    2838 ssh_runner.go:195] Run: systemctl --version
I0926 17:36:42.985338    2838 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/functional-449000/id_rsa Username:docker}
I0926 17:36:43.015668    2838 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
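
Note: the four ImageList variants in this group run the same command with different output encodings:

    out/minikube-darwin-arm64 -p functional-449000 image ls --format short
    out/minikube-darwin-arm64 -p functional-449000 image ls --format table
    out/minikube-darwin-arm64 -p functional-449000 image ls --format json
    out/minikube-darwin-arm64 -p functional-449000 image ls --format yaml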

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-449000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/library/minikube-local-cache-test | functional-449000 | 879e8f5b0899f | 30B    |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| localhost/my-image                          | functional-449000 | 52a1032742b46 | 1.41MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kicbase/echo-server               | functional-449000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-449000 image ls --format table --alsologtostderr:
I0926 17:36:45.031221    2854 out.go:345] Setting OutFile to fd 1 ...
I0926 17:36:45.031390    2854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:45.031394    2854 out.go:358] Setting ErrFile to fd 2...
I0926 17:36:45.031396    2854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:45.031543    2854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:36:45.031976    2854 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:45.032039    2854 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:45.032898    2854 ssh_runner.go:195] Run: systemctl --version
I0926 17:36:45.032906    2854 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/functional-449000/id_rsa Username:docker}
I0926 17:36:45.060997    2854 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/26 17:36:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-449000 image ls --format json --alsologtostderr:
[{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1
dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-449000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.
io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"52a1032742b461f8c73ba45ab9de5df88f49d5aae7c22339bcde14636e5c0abf","repoDigests":[],"repoTags":["localhost/my-image:functional-449000"],"size":"1410000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"879e8f5b0899f40cb75209366e4130c436eb7ea07529ba02d5808a1f4aa63e0d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-449000"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"s
ize":"66000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-449000 image ls --format json --alsologtostderr:
I0926 17:36:44.958246    2849 out.go:345] Setting OutFile to fd 1 ...
I0926 17:36:44.958383    2849 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:44.958386    2849 out.go:358] Setting ErrFile to fd 2...
I0926 17:36:44.958389    2849 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:44.958510    2849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:36:44.958948    2849 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:44.959013    2849 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:44.959829    2849 ssh_runner.go:195] Run: systemctl --version
I0926 17:36:44.959837    2849 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/functional-449000/id_rsa Username:docker}
I0926 17:36:44.989521    2849 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-449000 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 879e8f5b0899f40cb75209366e4130c436eb7ea07529ba02d5808a1f4aa63e0d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-449000
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-449000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-449000 image ls --format yaml --alsologtostderr:
I0926 17:36:43.057727    2840 out.go:345] Setting OutFile to fd 1 ...
I0926 17:36:43.057931    2840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:43.057935    2840 out.go:358] Setting ErrFile to fd 2...
I0926 17:36:43.057940    2840 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:43.058080    2840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:36:43.058546    2840 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:43.058610    2840 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:43.059504    2840 ssh_runner.go:195] Run: systemctl --version
I0926 17:36:43.059512    2840 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/functional-449000/id_rsa Username:docker}
I0926 17:36:43.088127    2840 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh pgrep buildkitd: exit status 1 (62.467208ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image build -t localhost/my-image:functional-449000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-449000 image build -t localhost/my-image:functional-449000 testdata/build --alsologtostderr: (1.687145584s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-449000 image build -t localhost/my-image:functional-449000 testdata/build --alsologtostderr:
I0926 17:36:43.196702    2844 out.go:345] Setting OutFile to fd 1 ...
I0926 17:36:43.196958    2844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:43.196961    2844 out.go:358] Setting ErrFile to fd 2...
I0926 17:36:43.196963    2844 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0926 17:36:43.197077    2844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19711-1075/.minikube/bin
I0926 17:36:43.197490    2844 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:43.198145    2844 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0926 17:36:43.198986    2844 ssh_runner.go:195] Run: systemctl --version
I0926 17:36:43.198994    2844 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19711-1075/.minikube/machines/functional-449000/id_rsa Username:docker}
I0926 17:36:43.227724    2844 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3906942323.tar
I0926 17:36:43.227793    2844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0926 17:36:43.231094    2844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3906942323.tar
I0926 17:36:43.232440    2844 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3906942323.tar: stat -c "%s %y" /var/lib/minikube/build/build.3906942323.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3906942323.tar': No such file or directory
I0926 17:36:43.232454    2844 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3906942323.tar --> /var/lib/minikube/build/build.3906942323.tar (3072 bytes)
I0926 17:36:43.240924    2844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3906942323
I0926 17:36:43.244348    2844 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3906942323 -xf /var/lib/minikube/build/build.3906942323.tar
I0926 17:36:43.247608    2844 docker.go:360] Building image: /var/lib/minikube/build/build.3906942323
I0926 17:36:43.247672    2844 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-449000 /var/lib/minikube/build/build.3906942323
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:52a1032742b461f8c73ba45ab9de5df88f49d5aae7c22339bcde14636e5c0abf done
#8 naming to localhost/my-image:functional-449000 done
#8 DONE 0.0s
I0926 17:36:44.840096    2844 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-449000 /var/lib/minikube/build/build.3906942323: (1.592456416s)
I0926 17:36:44.840184    2844 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3906942323
I0926 17:36:44.844320    2844 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3906942323.tar
I0926 17:36:44.847441    2844 build_images.go:217] Built localhost/my-image:functional-449000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3906942323.tar
I0926 17:36:44.847456    2844 build_images.go:133] succeeded building to: functional-449000
I0926 17:36:44.847460    2844 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.82s)
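
Note: the Dockerfile under testdata/build is not reproduced in this log; the BuildKit steps above (97B Dockerfile, FROM busybox, RUN true, ADD content.txt) suggest a fixture along these lines, so treat the comment block as a reconstruction rather than the actual file:

    # implied Dockerfile (reconstructed from the build steps; an assumption):
    #   FROM gcr.io/k8s-minikube/busybox:latest
    #   RUN true
    #   ADD content.txt /
    out/minikube-darwin-arm64 -p functional-449000 image build -t localhost/my-image:functional-449000 testdata/build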

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.786335541s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-449000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/DockerEnv/bash (0.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-449000 docker-env) && out/minikube-darwin-arm64 status -p functional-449000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-449000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.33s)
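
Note: docker-env points a host docker client at the daemon inside the minikube guest; the pattern the test exercises is:

    eval $(out/minikube-darwin-arm64 -p functional-449000 docker-env)
    docker images   # now lists images from the guest's daemon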

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-449000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-449000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7pxb2" [846ee69c-0054-4bce-8865-e661ea8cb517] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7pxb2" [846ee69c-0054-4bce-8865-e661ea8cb517] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.008496792s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
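
Note: the remaining ServiceCmd subtests all target this deployment; the setup is plain kubectl against the minikube context, and the NodePort is later resolved to http://192.168.105.4:30820:

    kubectl --context functional-449000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-449000 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-arm64 -p functional-449000 service hello-node --url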

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image load --daemon kicbase/echo-server:functional-449000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image load --daemon kicbase/echo-server:functional-449000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-449000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image load --daemon kicbase/echo-server:functional-449000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image save kicbase/echo-server:functional-449000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.17s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image rm kicbase/echo-server:functional-449000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-449000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 image save --daemon kicbase/echo-server:functional-449000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-449000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)
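
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a save/remove/restore round trip; condensed (tar path shortened here):

    out/minikube-darwin-arm64 -p functional-449000 image save kicbase/echo-server:functional-449000 ./echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-449000 image rm kicbase/echo-server:functional-449000
    out/minikube-darwin-arm64 -p functional-449000 image load ./echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-449000 image save --daemon kicbase/echo-server:functional-449000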

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2665: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.17s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-449000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [77cf322d-059d-43e8-9c5d-fb32b1b8b3a7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [77cf322d-059d-43e8-9c5d-fb32b1b8b3a7] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.009706667s
I0926 17:36:03.270966    1597 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service list -o json
functional_test.go:1494: Took "85.776041ms" to run "out/minikube-darwin-arm64 -p functional-449000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30820
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30820
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-449000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
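
Note: with the tunnel from StartTunnel still running, the LoadBalancer service picks up the ingress IP read here (10.109.113.46, used by AccessDirect below); the lookup is:

    kubectl --context functional-449000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'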

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.113.46 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0926 17:36:03.363451    1597 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0926 17:36:03.400217    1597 config.go:182] Loaded profile config "functional-449000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-449000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "97.598834ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.776917ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "100.4485ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.492958ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (5.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1635653068/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727397385944286000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1635653068/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727397385944286000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1635653068/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727397385944286000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1635653068/001/test-1727397385944286000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.884042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:36:26.006648    1597 retry.go:31] will retry after 313.922664ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (85.109083ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:36:26.407953    1597 retry.go:31] will retry after 770.348116ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:36 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:36 test-1727397385944286000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh cat /mount-9p/test-1727397385944286000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-449000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a4a920ac-5679-4e9f-8669-8f1209d98630] Pending
helpers_test.go:344: "busybox-mount" [a4a920ac-5679-4e9f-8669-8f1209d98630] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a4a920ac-5679-4e9f-8669-8f1209d98630] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a4a920ac-5679-4e9f-8669-8f1209d98630] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.0036205s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-449000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1635653068/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.87s)
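
Note: the two non-zero findmnt exits above are expected: the mount daemon starts asynchronously, so the test polls over ssh with growing backoff (retry.go) until the 9p filesystem appears. A minimal sketch of that poll loop; the binary, profile, and mount point match the log, while the backoff schedule is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until the 9p mount is visible inside the guest.
        for _, backoff := range []time.Duration{0, 300 * time.Millisecond, 800 * time.Millisecond} {
            time.Sleep(backoff)
            cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-449000",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            if out, err := cmd.CombinedOutput(); err == nil {
                fmt.Printf("mount is up:\n%s", out)
                return
            }
        }
        fmt.Println("mount never appeared")
    }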

TestFunctional/parallel/MountCmd/specific-port (1.09s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2957990606/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.693125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:36:31.883809    1597 retry.go:31] will retry after 543.981319ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2957990606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "sudo umount -f /mount-9p": exit status 1 (64.817042ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-449000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2957990606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.09s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T" /mount1: exit status 1 (88.350458ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0926 17:36:32.997317    1597 retry.go:31] will retry after 434.134343ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-449000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-449000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-449000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1052538646/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.80s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-449000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-449000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-449000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (174.58s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-380000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0926 17:37:02.823610    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/addons-514000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-380000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m54.404454125s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (174.58s)

TestMultiControlPlane/serial/DeployApp (5.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-380000 -- rollout status deployment/busybox: (3.645825083s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-hpk2q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-jcbsg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-pdh9q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-hpk2q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-jcbsg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-pdh9q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-hpk2q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-jcbsg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-pdh9q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.33s)

TestMultiControlPlane/serial/PingHostFromPods (0.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-hpk2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-hpk2q -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-jcbsg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-jcbsg -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-pdh9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-380000 -- exec busybox-7dff88458-pdh9q -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.73s)
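
Note: the pipeline run in each busybox pod recovers the host's IP from nslookup output: awk 'NR==5' keeps the fifth line and cut -d' ' -f3 keeps its third space-separated field, which the test then pings (192.168.105.1 here). A sketch of the same extraction; the sample text is illustrative busybox nslookup output, not captured from this run:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Illustrative busybox `nslookup host.minikube.internal` output.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.105.1"

        lines := strings.Split(sample, "\n")
        // awk 'NR==5' -> fifth line; cut -d' ' -f3 -> third field.
        fields := strings.Split(lines[4], " ")
        fmt.Println(fields[2]) // 192.168.105.1
    }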

TestMultiControlPlane/serial/AddWorkerNode (57.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-380000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-380000 -v=7 --alsologtostderr: (56.859358416s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.07s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-380000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.29s)

TestMultiControlPlane/serial/CopyFile (4.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp testdata/cp-test.txt ha-380000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1019514271/001/cp-test_ha-380000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000:/home/docker/cp-test.txt ha-380000-m02:/home/docker/cp-test_ha-380000_ha-380000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test_ha-380000_ha-380000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000:/home/docker/cp-test.txt ha-380000-m03:/home/docker/cp-test_ha-380000_ha-380000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test_ha-380000_ha-380000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000:/home/docker/cp-test.txt ha-380000-m04:/home/docker/cp-test_ha-380000_ha-380000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test_ha-380000_ha-380000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp testdata/cp-test.txt ha-380000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1019514271/001/cp-test_ha-380000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m02:/home/docker/cp-test.txt ha-380000:/home/docker/cp-test_ha-380000-m02_ha-380000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test_ha-380000-m02_ha-380000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m02:/home/docker/cp-test.txt ha-380000-m03:/home/docker/cp-test_ha-380000-m02_ha-380000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test_ha-380000-m02_ha-380000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m02:/home/docker/cp-test.txt ha-380000-m04:/home/docker/cp-test_ha-380000-m02_ha-380000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test_ha-380000-m02_ha-380000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp testdata/cp-test.txt ha-380000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1019514271/001/cp-test_ha-380000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m03:/home/docker/cp-test.txt ha-380000:/home/docker/cp-test_ha-380000-m03_ha-380000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test_ha-380000-m03_ha-380000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m03:/home/docker/cp-test.txt ha-380000-m02:/home/docker/cp-test_ha-380000-m03_ha-380000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test_ha-380000-m03_ha-380000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m03:/home/docker/cp-test.txt ha-380000-m04:/home/docker/cp-test_ha-380000-m03_ha-380000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test_ha-380000-m03_ha-380000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp testdata/cp-test.txt ha-380000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile1019514271/001/cp-test_ha-380000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m04:/home/docker/cp-test.txt ha-380000:/home/docker/cp-test_ha-380000-m04_ha-380000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test.txt"
E0926 17:40:48.285114    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:48.291806    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:48.304578    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000 "sudo cat /home/docker/cp-test_ha-380000-m04_ha-380000.txt"
E0926 17:40:48.326965    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
E0926 17:40:48.369038    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m04:/home/docker/cp-test.txt ha-380000-m02:/home/docker/cp-test_ha-380000-m04_ha-380000-m02.txt
E0926 17:40:48.452594    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m02 "sudo cat /home/docker/cp-test_ha-380000-m04_ha-380000-m02.txt"
E0926 17:40:48.615359    1597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19711-1075/.minikube/profiles/functional-449000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 cp ha-380000-m04:/home/docker/cp-test.txt ha-380000-m03:/home/docker/cp-test_ha-380000-m04_ha-380000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-380000 ssh -n ha-380000-m03 "sudo cat /home/docker/cp-test_ha-380000-m04_ha-380000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.10s)
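
Note: every cp above is immediately verified by reading the file back with ssh -n <node> "sudo cat ..." on the destination node. One round trip of that pattern, sketched with the profile and node names from the log (the runOut helper is illustrative, not part of the suite):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runOut runs a command and returns its combined output, failing loudly.
    func runOut(name string, args ...string) string {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        return string(out)
    }

    func main() {
        bin := "out/minikube-darwin-arm64"
        // Copy a local file onto node m02, then read it back through ssh.
        runOut(bin, "-p", "ha-380000", "cp",
            "testdata/cp-test.txt", "ha-380000-m02:/home/docker/cp-test.txt")
        got := runOut(bin, "-p", "ha-380000", "ssh", "-n", "ha-380000-m02",
            "sudo cat /home/docker/cp-test.txt")
        if strings.TrimSpace(got) == "" {
            panic("copied file came back empty")
        }
        fmt.Print(got)
    }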

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.997630917s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-992000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-992000 --output=json --user=testUser: (3.340933167s)
--- PASS: TestJSONOutput/stop/Command (3.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-171000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-171000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.950959ms)

-- stdout --
	{"specversion":"1.0","id":"6e15842a-ac8c-4d1c-96fd-246522e71305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-171000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"affa8bf9-056d-4ca1-9958-4a9321668b51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"f714164e-67b5-400a-bc98-eb3a81472dd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig"}}
	{"specversion":"1.0","id":"f8d995ca-cedc-4fe1-a861-9162b62d03c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"985e7e2e-fcd6-4f8e-a3d1-b3b199724eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c67eddd-46e5-43af-8e19-a184556b64c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube"}}
	{"specversion":"1.0","id":"32c4b54a-6b67-4b49-8d6d-846271033a6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33ae723d-df32-4b9e-ae9c-298377eb2688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-171000
--- PASS: TestErrorJSONOutput (0.20s)
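
Note: each stdout line above is one CloudEvents-style JSON object, and the final event carries the expected DRV_UNSUPPORTED_OS error that produces exit status 56. A sketch of decoding such a stream line by line; the field names come from the log output itself:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the fields visible in the JSON lines above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // Feed minikube's --output=json stream in on stdin, one object per line.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // tolerate any non-JSON lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s: %s (exit code %s)\n",
                    e.Data["name"], e.Data["message"], e.Data["exitcode"])
            }
        }
    }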

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.1s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.10s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-843000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (108.981875ms)

-- stdout --
	* [NoKubernetes-843000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19711-1075/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19711-1075/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
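
Note: exit status 14 is the usage-error path: --no-kubernetes contradicts an explicit --kubernetes-version, so start refuses the combination before touching the driver. The same guard pattern, sketched generically with the standard flag package rather than minikube's actual cobra wiring:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        version := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // Reject the contradictory combination up front, exiting with the
        // same status the log shows for MK_USAGE.
        if *noK8s && *version != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
        fmt.Println("flags ok")
    }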

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-843000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-843000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.496292ms)

-- stdout --
	* The control-plane node NoKubernetes-843000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-843000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
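
Note: this check passes precisely because the command fails: with the guest stopped, minikube ssh exits non-zero (83 here) instead of reporting an active kubelet. Inspecting a subprocess exit code from Go looks like this; a generic sketch, not the suite's helper:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-843000",
            "sudo systemctl is-active --quiet service kubelet")
        err := cmd.Run()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("kubelet is active; Kubernetes appears to be running")
        case errors.As(err, &ee):
            // Non-zero exit: the host is stopped (83 above) or kubelet is
            // inactive; either way Kubernetes is not running.
            fmt.Println("kubelet not active; exit status", ee.ExitCode())
        default:
            fmt.Println("could not run the check:", err)
        }
    }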

TestNoKubernetes/serial/ProfileList (31.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.77255525s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.74244025s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.52s)

TestNoKubernetes/serial/Stop (3.41s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-843000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-843000: (3.406878541s)
--- PASS: TestNoKubernetes/serial/Stop (3.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-843000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-843000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.580042ms)

-- stdout --
	* The control-plane node NoKubernetes-843000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-843000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-211000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (1.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-187000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-187000 --alsologtostderr -v=3: (1.902636541s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-187000 -n old-k8s-version-187000: exit status 7 (47.440959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-187000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-917000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-917000 --alsologtostderr -v=3: (1.921403708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.92s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-917000 -n embed-certs-917000: exit status 7 (48.843792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-917000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/no-preload/serial/Stop (2.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-421000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-421000 --alsologtostderr -v=3: (2.024310042s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-421000 -n no-preload-421000: exit status 7 (59.39325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-421000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-369000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-369000 --alsologtostderr -v=3: (2.122626709s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-369000 -n default-k8s-diff-port-369000: exit status 7 (55.469459ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-369000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-620000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-620000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-620000 --alsologtostderr -v=3: (3.348658541s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.35s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (55.6615ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-620000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-790000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-790000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-790000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/hosts:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/resolv.conf:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-790000

>>> host: crictl pods:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: crictl containers:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> k8s: describe netcat deployment:
error: context "cilium-790000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-790000" does not exist

>>> k8s: netcat logs:
error: context "cilium-790000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-790000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-790000" does not exist

>>> k8s: coredns logs:
error: context "cilium-790000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-790000" does not exist

>>> k8s: api server logs:
error: context "cilium-790000" does not exist

>>> host: /etc/cni:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: ip a s:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: ip r s:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: iptables-save:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: iptables table nat:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-790000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-790000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-790000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-790000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-790000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-790000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-790000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-790000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-790000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-790000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-790000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: kubelet daemon config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> k8s: kubelet logs:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-790000

>>> host: docker daemon status:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: docker daemon config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: docker system info:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: cri-docker daemon status:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: cri-docker daemon config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: cri-dockerd version:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: containerd daemon status:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: containerd daemon config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: containerd config dump:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: crio daemon status:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: crio daemon config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: /etc/crio:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

>>> host: crio config:
* Profile "cilium-790000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-790000"

----------------------- debugLogs end: cilium-790000 [took: 2.185878083s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-790000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-281000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-281000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)